13 Big Limitations of Hadoop & Solution To Hadoop Drawbacks

1. Limitations of Hadoop – Objective

Although Hadoop is one of the most powerful tools for big data, it has various limitations: it is not suited to small files, it cannot handle live (streaming) data well, its processing speed is slow, and it is not efficient for iterative processing or for caching, among others.

In this tutorial on the limitations of Hadoop, we will first learn what Hadoop is and what its pros and cons are. We will see the features of Hadoop that make it so popular. We will then look at 13 big disadvantages of Hadoop, because of which Apache Spark and Apache Flink came into existence, and learn about various ways to overcome these drawbacks.


2. Hadoop – Introduction & Features

Let us start with what Hadoop is and the features that make it so popular.

Hadoop is an open-source software framework for distributed storage and distributed processing of extremely large data sets. Important features of Hadoop are:

  • Apache Hadoop is an open source project, which means its code can be modified to suit business requirements.
  • In Hadoop, data remains highly available and accessible despite hardware failure, because multiple copies of the data are kept. If a machine or any piece of hardware crashes, the data can be accessed from another path.
  • Hadoop is highly scalable, as new hardware can easily be added to the cluster. Hadoop also provides horizontal scalability, which means nodes can be added on the fly without any downtime.
  • Hadoop is fault tolerant: by default, three replicas of each block are stored across the cluster, so if any node goes down, the data on it can easily be recovered from another node.
  • In Hadoop, data is stored reliably on the cluster despite machine failure, thanks to the replication of data across the cluster.
  • Hadoop runs on a cluster of commodity hardware, which is not very expensive.
  • Hadoop is very easy to use, as the client does not need to deal with distributed computing; the framework takes care of everything.

But as every technology has pros and cons, Hadoop has many limitations as well. Having seen the features and advantages of Hadoop above, let us now look at the limitations that brought Apache Spark and Apache Flink into the picture.


3. Big Limitations of Hadoop for Big Data Analytics

The various limitations of Hadoop are discussed below in this section, along with their solutions.

3.1. Issue with Small Files

Hadoop is not suited to small data. The Hadoop Distributed File System (HDFS) lacks the ability to efficiently support random reads of small files because of its high-capacity design.

Small files are a major problem in HDFS. A small file is one significantly smaller than the HDFS block size (128 MB by default). HDFS cannot handle huge numbers of such files, as it was designed to work with a small number of large files storing large data sets, rather than a large number of small files. If there are too many small files, the NameNode, which stores the entire HDFS namespace in memory, becomes overloaded. As a rough illustration (assuming the commonly cited figure of about 150 bytes of NameNode heap per file, directory, or block): 10 million small files, each occupying its own block, amount to roughly 20 million namespace objects, or about 3 GB of NameNode heap, before a single byte of data is processed.

Solution-

  • The solution to this drawback of Hadoop is simple: just merge the small files into bigger files, then copy the bigger files to HDFS.
  • HAR files (Hadoop Archives) were introduced to reduce the pressure that lots of small files put on the NameNode's memory. HAR files work by building a layered filesystem on top of HDFS. A HAR file is created with the hadoop archive command, which runs a MapReduce job to pack the files being archived into a small number of HDFS files. However, reading files in a HAR is no more efficient than reading files in HDFS, and is in fact slower, since each HAR file access requires reading two index files as well as the data file.
  • Sequence files work very well in practice to overcome the small-file problem: the filename is used as the key and the file contents as the value. A program can pack a large number of small files (say, 100 KB each) into a single sequence file, which can then be processed in a streaming fashion. Because a sequence file is splittable, MapReduce can break it into chunks and operate on each chunk independently (a minimal sketch follows this list).
  • Storing files in HBase is a very common design pattern for overcoming the small-file problem in HDFS. Rather than actually storing millions of small files in HBase, the binary content of each file is added to a cell.
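
Below is a minimal sketch of the sequence-file approach mentioned above, written against Hadoop's Java API. The input directory, output path, and class name are illustrative, not part of any standard tool:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilesToSequenceFile {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path inputDir = new Path("/data/small-files"); // hypothetical input directory
    Path output = new Path("/data/packed.seq");    // hypothetical output file

    // One sequence file holds many small files: key = file name, value = contents.
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(output),
        SequenceFile.Writer.keyClass(Text.class),
        SequenceFile.Writer.valueClass(BytesWritable.class))) {
      for (FileStatus status : fs.listStatus(inputDir)) {
        byte[] contents = new byte[(int) status.getLen()];
        try (FSDataInputStream in = fs.open(status.getPath())) {
          IOUtils.readFully(in, contents, 0, contents.length);
        }
        writer.append(new Text(status.getPath().getName()),
                      new BytesWritable(contents));
      }
    }
  }
}
```

The resulting single file occupies one NameNode entry per block instead of one per small file, and MapReduce can split it for parallel processing.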

3.2. Slow Processing Speed

In Hadoop, MapReduce processes large data sets with a parallel, distributed algorithm. There are two tasks to perform, Map and Reduce, and MapReduce requires a lot of time to perform them, thereby increasing latency. Because the data is distributed and processed over the cluster, with intermediate results written to and read back from disk between stages, time increases and processing speed drops.

Solution-

As a solution to this limitation of Hadoop, Spark overcomes the issue through in-memory processing of data. In-memory processing is faster, as no time is spent moving data and processes in and out of disk. Spark is claimed to be up to 100 times faster than MapReduce, as it processes everything in memory. Flink is also used, as it processes data faster than Spark thanks to its streaming architecture, and Flink can be instructed to process only the parts of the data that have actually changed, which significantly increases job performance.
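
The following is a minimal sketch (using Spark's Java API) of the in-memory model described above: the data set is read once, cached in executor memory, and then reused by a second action without touching disk again. The input path, master setting, and class name are illustrative:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class InMemoryExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("in-memory-sketch")
                                    .setMaster("local[*]"); // local mode for the sketch
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaRDD<String> words = sc.textFile("hdfs:///data/input.txt") // hypothetical path
          .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
          .cache(); // keep the RDD in executor memory for reuse

      long total = words.count();               // first action: reads from storage
      long distinct = words.distinct().count(); // second action: served from cache
      System.out.println(total + " words, " + distinct + " distinct");
    }
  }
}
```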


3.3. Support for Batch Processing only

Hadoop supports batch processing only; it does not process streamed data, so overall performance is slower. Moreover, the MapReduce framework does not leverage the memory of the Hadoop cluster to the maximum.

Solution-

To solve this limitation of Hadoop, Spark is used, which improves performance; however, Spark's stream processing is not as efficient as Flink's, since it uses micro-batch processing. Flink improves overall performance by providing a single runtime for both streaming and batch processing. Flink also uses native closed-loop iteration operators, which make machine learning and graph processing faster.


3.4. No Real-time Data Processing

Apache Hadoop is designed for batch processing: it takes a huge amount of data as input, processes it, and produces the result. Although batch processing is very efficient for processing a high volume of data, the output can be significantly delayed, depending on the size of the data and the computational power of the system. Hadoop is therefore not suitable for real-time data processing.

Solution-

  • Apache Spark supports stream processing. Stream processing involves continuous input and output of data; it emphasizes the velocity of the data, which is processed within a small period of time. Learn more about the Spark Streaming APIs.
  • Apache Flink provides a single runtime for streaming as well as batch processing, so one common runtime serves both data-streaming and batch applications. Flink is a stream processing system that is able to process data row after row in real time (a minimal sketch follows this list).
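
The sketch below uses Flink's Java DataStream API to process lines as they arrive on a socket, one record at a time rather than in collected batches. The host, port, and class name are illustrative:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    // Continuous input: one record per line arriving on a socket source.
    DataStream<String> lines = env.socketTextStream("localhost", 9999);

    // Each record is transformed and emitted as soon as it is received.
    lines.map(new MapFunction<String, String>() {
      @Override
      public String map(String line) {
        return line.toUpperCase();
      }
    }).print();

    env.execute("streaming-sketch");
  }
}
```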

3.5. No Delta Iteration

Hadoop is not very efficient for iterative processing, as it does not support cyclic data flow (i.e., a chain of stages in which the output of each stage is the input to the next).

Solution-

Apache Spark can be used to overcome this type of limitation of Hadoop, as it accesses data from RAM instead of disk, which dramatically improves the performance of iterative algorithms that access the same dataset repeatedly. Spark iterates over its data in batches: each iteration has to be scheduled and executed separately. A minimal sketch follows.
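
Here is a minimal sketch (Spark's Java API, with illustrative data and an illustrative update rule) of an iterative computation over a cached dataset. In MapReduce, each iteration would be a separate job writing its output to HDFS; in Spark, the working set stays in memory between iterations:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class IterativeExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("iterative-sketch")
                                    .setMaster("local[*]"); // local mode for the sketch
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaRDD<Double> values =
          sc.parallelize(Arrays.asList(1.0, 2.0, 3.0, 4.0)).cache();

      for (int i = 0; i < 10; i++) {
        // Each iteration is scheduled as its own Spark job, but the input
        // is served from memory rather than re-read from disk. A real job
        // would also unpersist the previous iteration's RDD.
        values = values.map(v -> v * 0.9).cache();
        System.out.println("iteration " + i + " sum = "
            + values.reduce(Double::sum));
      }
    }
  }
}
```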


3.6. Latency

In Hadoop, the MapReduce framework is comparatively slow, since it is designed to support many different formats and structures and huge volumes of data. In MapReduce, Map takes a set of data and converts it into another set of data in which individual elements are broken down into key-value pairs; Reduce takes the output from Map as input and processes it further. MapReduce requires a lot of time to perform these tasks, thereby increasing latency.

Solution-

Spark is used to reduce this limitation of Hadoop. Apache Spark is yet another batch system, but it is relatively faster, since it caches much of the input data in memory as RDDs (Resilient Distributed Datasets) and keeps intermediate data in memory as well. Flink's data streaming achieves low latency and high throughput.

Refer to this guide to learn how to create RDDs in Apache Spark.


3.7. Not Easy to Use

In Hadoop, MapReduce developers need to hand-code each and every operation, which makes it very difficult to work with. MapReduce has no interactive mode, though tools built on top of it, such as Hive and Pig, make working with MapReduce somewhat easier for adopters.

Solution-

To solve this drawback of Hadoop, we can use Spark. Spark has an interactive mode, so developers and users alike can get intermediate feedback for queries and other actions. Spark is easy to program, as it has tons of high-level operators. Flink is also easy to use, as it too has high-level operators. In this way, Spark resolves many of the limitations of Hadoop.


3.8. Security

Managing a complex application like Hadoop can be challenging. If the person managing the platform does not know how to enable its security features, the data could be at huge risk. Hadoop lacks encryption at the storage and network levels, which is a major point of concern. Hadoop supports Kerberos authentication, which is hard to manage.

HDFS supports access control lists (ACLs) and a traditional file-permissions model. In addition, third-party vendors enable organizations to leverage Active Directory Kerberos and LDAP for authentication.

Solution-

Spark provides a security bonus that helps overcome these limitations of Hadoop. If we run Spark on HDFS, it can use HDFS ACLs and file-level permissions. Additionally, Spark can run on YARN, giving it the capability of using Kerberos authentication.
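
As a minimal sketch, the Spark security features mentioned above can be switched on through SparkConf. The configuration keys below are real Spark settings, while the user names and class name are hypothetical:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SecureConfExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("secure-sketch")
        .set("spark.authenticate", "true")       // require shared-secret auth between Spark processes
        .set("spark.acls.enable", "true")        // enforce job-level ACLs
        .set("spark.ui.view.acls", "alice,bob")  // hypothetical users allowed to view the UI
        .set("spark.modify.acls", "alice");      // hypothetical user allowed to kill/modify jobs

    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      // Application logic goes here. On YARN, Kerberos credentials are
      // supplied via spark-submit's --principal and --keytab options.
    }
  }
}
```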


3.9. No Abstraction

Hadoop does not offer any kind of abstraction, so MapReduce developers need to hand-code each and every operation, which makes it very difficult to work with.

Solution-

To overcome this drawback of Hadoop, Spark is used: it provides the RDD abstraction for batch processing. Flink has the DataSet abstraction.


3.10. Vulnerable by Nature

Hadoop is written entirely in Java, one of the most widely used languages. As a result, Java has been heavily exploited by cyber criminals and implicated in numerous security breaches.


3.11. No Caching

Hadoop is not efficient at caching. In Hadoop, MapReduce cannot cache intermediate data in memory for further use, which diminishes Hadoop's performance.

Solution-

Spark and Flink can overcome this limitation of Hadoop: both cache data in memory for further iterations, which enhances overall performance.


3.12. Lengthy Line of Code

Hadoop has about 120,000 lines of code; a larger codebase means more potential bugs, and programs take more time to execute.

Solution-

Although Spark and Flink expose both Scala and Java APIs, their cores are implemented in Scala, so their line counts are far smaller than Hadoop's. Programs therefore take less time to execute, resolving the lengthy-lines-of-code limitation of Hadoop.

To learn Scala, refer to the best Scala books to become a master in Scala.


3.13. Uncertainty

Hadoop only ensures that a data job completes; it cannot guarantee when the job will complete.


4. Limitations of Hadoop and Their Solutions – Conclusion

As a result of these limitations of Hadoop, the need for Spark and Flink emerged, making the ecosystem friendlier for working with huge amounts of data. Spark provides in-memory processing of data, thus improving processing speed. Flink improves overall performance by providing a single runtime for streaming as well as batch processing. Spark also provides a security bonus.

FAQs on 13 Big Limitations of Hadoop & Solution To Hadoop Drawbacks

1. What are the limitations of Hadoop?
Ans. Some of the limitations of Hadoop are:
  1. Scalability: Hadoop has limitations in handling large-scale data processing efficiently.
  2. Complexity: Hadoop requires specialized skills and expertise to set up and maintain.
  3. Real-time Processing: Hadoop is not suitable for real-time processing, as it has high latency.
  4. Single Point of Failure: Hadoop's NameNode acts as a single point of failure, leading to potential data loss.
  5. Data Locality: Hadoop's performance relies heavily on data locality, which can be a challenge in some cases.

2. How can the scalability limitation of Hadoop be addressed?
Ans. The scalability limitation of Hadoop can be addressed by:
  1. Using Cluster Management Tools: Tools like Apache Mesos and Kubernetes can help manage and scale Hadoop clusters more efficiently.
  2. Distributed File Systems: Implementing distributed file systems like HDFS or Ceph can enhance Hadoop's scalability.
  3. Adding More Nodes: Increasing the number of nodes in the Hadoop cluster can improve its scalability.
  4. Using Complementary Technologies: Integrating Hadoop with other technologies like Apache Spark or Apache Flink can provide better scalability for specific use cases.

3. What are the drawbacks of Hadoop's complexity?
Ans. The drawbacks of Hadoop's complexity include:
  1. Skill Requirement: Setting up and maintaining Hadoop requires specialized skills and expertise, which can be time-consuming and costly.
  2. Steep Learning Curve: Learning Hadoop and its associated technologies can be challenging for individuals or teams with limited experience in distributed systems.
  3. Infrastructure Management: Managing the infrastructure required for Hadoop, such as hardware, networking, and storage, adds complexity and cost to the overall system.

4. What are the alternatives to Hadoop for real-time processing?
Ans. Some alternatives to Hadoop for real-time processing are:
  1. Apache Storm: a real-time stream processing system that can process large volumes of data in real time.
  2. Apache Flink: a distributed stream processing framework that provides low-latency, high-throughput processing capabilities.
  3. Apache Kafka: a distributed event streaming platform that enables real-time data processing and messaging.

5. How can the single point of failure issue in Hadoop be mitigated?
Ans. The single point of failure issue in Hadoop can be mitigated by:
  1. Implementing High Availability: Enabling Hadoop's High Availability (HA) feature ensures that a standby NameNode is available to take over in case of a failure.
  2. Regular Backups: Taking regular backups of the Hadoop cluster's metadata and data can help recover from potential failures.
  3. Distributed Storage: Using distributed storage systems like HDFS or object storage can provide redundancy and reduce the impact of a single point of failure.
  4. Monitoring and Alerting: Implementing robust monitoring and alerting systems can help identify and address potential failures in a timely manner.