Regarding unstructured data handling in Hadoop


Sathish Kumar
Member Moderator
Joined: 4 months ago
Posts: 1203
18/03/2021 11:53 am  
How can we import unstructured and semi-structured data into Hadoop? It is easy to import structured data because I can import it directly from MySQL using Sqoop. But what should I do in the case of unstructured data?

Noble Member
Joined: 4 months ago
Posts: 1179
18/03/2021 11:54 am  

There are multiple ways to import unstructured data into Hadoop, depending on your use case.

1. Using HDFS shell commands such as put or copyFromLocal to move flat files into HDFS. For details, please see the File System Shell Guide.
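For example (the local and HDFS paths below are placeholders, not standard locations):

```
# Copy a local flat file into HDFS
hdfs dfs -put /var/log/app/events.log /data/raw/

# Equivalent command using copyFromLocal
hdfs dfs -copyFromLocal /var/log/app/events.log /data/raw/

# Verify the file landed
hdfs dfs -ls /data/raw/
```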

2. Using the WebHDFS REST API for application integration. See the WebHDFS REST API documentation.
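A file upload over WebHDFS is a two-step operation: the NameNode first returns a redirect to a DataNode, and the client then sends the data there. A sketch with curl (the hostname is a placeholder; 9870 is the default NameNode HTTP port in Hadoop 3):

```
# Step 1: ask the NameNode for a write location; the response contains
# a Location header pointing at a DataNode.
curl -i -X PUT \
  "http://namenode.example.com:9870/webhdfs/v1/data/raw/events.log?op=CREATE&user.name=hdfs"

# Step 2: PUT the file contents to the Location URL from step 1.
curl -i -X PUT -T events.log "<Location-URL-from-step-1>"
```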

3. Using Apache Flume. It is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of data from many different sources into a centralized data store such as HDFS. Although Flume has historically been used mostly for log collection and aggregation, it can also be combined with Kafka to form a real-time event-processing pipeline.
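A Flume agent is defined as a source, a channel, and a sink in a properties file. A minimal sketch that watches a spool directory and writes into HDFS (agent name, directories, and HDFS path are example values):

```
# Name the components of agent a1
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: pick up files dropped into a spool directory
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/log/incoming
a1.sources.r1.channels = c1

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory

# Sink: write events into HDFS, partitioned by date
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.channel = c1
```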

4. Using Storm, a general-purpose event-processing system. Within a topology composed of spouts and bolts, it can be used to ingest event-based unstructured data into Hadoop.
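To make the spout/bolt model concrete, here is a toy sketch in plain Python (not the real Storm API): a "spout" emits raw events and a "bolt" handles each event individually, the moment it arrives.

```python
def spout():
    """Stand-in for a Storm spout: yields raw log events one by one."""
    for event in ["login user=alice", "error code=500", "login user=bob"]:
        yield event

def bolt(event, sink):
    """Stand-in for a Storm bolt: parses a single event and emits it."""
    kind, _, detail = event.partition(" ")
    sink.append({"kind": kind, "detail": detail})

hdfs_stand_in = []          # a real topology would end in an HDFS-writing bolt
for event in spout():       # each event flows through the bolt immediately
    bolt(event, hdfs_stand_in)
```

In a real topology the spout would pull from a queue such as Kafka, and the final bolt would write to HDFS; the key point is that processing is triggered per event.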

5. Spark's streaming component offers another way to ingest real-time unstructured data into HDFS. Its processing model is quite different from Storm's, though: while Storm processes incoming events one at a time, Spark Streaming batches up the events that arrive within a short time window and processes them together, a model called mini-batching. Spark Streaming runs on top of the Spark Core engine, which is claimed to be up to 100x faster than MapReduce in memory and 10x faster on disk.
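The mini-batch idea can be sketched in plain Python (a toy illustration, not the Spark API): events carrying timestamps are grouped into fixed-size time windows, and each window is then processed as one batch.

```python
def mini_batches(events, window=2):
    """Group (timestamp, payload) events into fixed-size time windows,
    mimicking how Spark Streaming forms mini-batches."""
    batches = {}
    for ts, payload in events:
        batches.setdefault(ts // window, []).append(payload)
    # Return batches in time order, each processed as a unit
    return [batches[k] for k in sorted(batches)]

events = [(0, "a"), (1, "b"), (2, "c"), (3, "d"), (5, "e")]
result = mini_batches(events, window=2)
# Windows: [0,2) -> a,b ; [2,4) -> c,d ; [4,6) -> e
```

In real Spark Streaming the window size is the batch interval you pass when creating the streaming context, and each mini-batch is processed by the Spark Core engine like any other RDD.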