Monday, 17 August 2015

Differences between Hadoop 1.0 & Hadoop 2.0

Early adopters of the Hadoop ecosystem were restricted to MapReduce-based processing models only. Hadoop 2 brings with it processing models that lend themselves to many more Big Data uses, including interactive SQL queries over big data, analysis of Big Data scale graphs, and scalable machine learning. The evolution from Hadoop 1's limited processing model, consisting of batch-oriented MapReduce jobs, to the more specialized and interactive models of Hadoop 2 has showcased the potential value of large-scale distributed processing systems. Read on for the major differences between Hadoop 1 and 2.

Hadoop--YARN and HDFS 

While other available solutions tend to be unsuitable for interactive analytics, I/O intensive, or constrained in their support for graph processing, memory-intensive algorithms, and other machine learning workloads, Hadoop is far ahead in the race. By creating a reliable, scalable, and strong foundation for Big Data architectures, the Hadoop ecosystem has positioned itself as one of the most dominant Big Data platforms for analytics. It deserves mention that Hadoop developers rewrote major components of the Hadoop 1 file system to produce Hadoop 2. The resource manager YARN and HDFS federation were introduced as the important advances of Hadoop 2.

HDFS-- Hadoop file system with a difference

HDFS, the Hadoop distributed file system, consists of two main components: a block storage service and namespaces. The block storage service handles block operations, management of the cluster's data nodes, and replication; the namespace manages all operations on files and directories, particularly the creation and modification of files and directories.
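As a minimal sketch, the namespace side of HDFS is what a client touches through the usual FileSystem Java API; the snippet below creates a directory and a file. The cluster address and paths are illustrative, not taken from any particular setup.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NamespaceOpsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed Namenode address for the example.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            FileSystem fs = FileSystem.get(conf);

            // Namespace operations: create a directory and a file inside it.
            fs.mkdirs(new Path("/data/sales"));
            try (FSDataOutputStream out = fs.create(new Path("/data/sales/part-0000"))) {
                out.writeUTF("sample record");
            }
            fs.close();
        }
    }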

In Hadoop 1, a single Namenode was responsible for managing the complete namespace of a Hadoop cluster. With HDFS federation, several Namenode servers can manage separate namespaces, which allows for performance improvements, horizontal scaling of the namespace, and multiple independent namespaces. The federation implementation is backward compatible, so existing single-Namenode configurations continue to operate without changes. Moving to a federated setup requires Hadoop administrators to format the additional Namenodes, update the cluster configuration to list them, and add them to the Hadoop cluster.
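As an illustration, a federated cluster is described to clients by listing several nameservices in the configuration. The hedged sketch below sets the relevant keys programmatically; the nameservice ids and host names are made up for the example, and in practice these values normally live in hdfs-site.xml.

    import org.apache.hadoop.conf.Configuration;

    public class FederationConfSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Two independent namespaces, each served by its own Namenode.
            conf.set("dfs.nameservices", "ns1,ns2");
            conf.set("dfs.namenode.rpc-address.ns1", "nn1.example.com:8020");
            conf.set("dfs.namenode.rpc-address.ns2", "nn2.example.com:8020");
            System.out.println("Configured nameservices: " + conf.get("dfs.nameservices"));
        }
    }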

YARN—Supports additional performance enhancements for Hadoop 2

While HDFS federation brings reliability and scalability to Hadoop, YARN brings significant performance enhancements for certain applications, implements a more flexible execution engine, and offers support for additional processing models. As a recap, YARN is the resource manager that resulted from separating the resource management capabilities of Hadoop 1's MapReduce from its processing engine.

Often referred to as the operating system of Hadoop because of its role in managing and monitoring diverse workloads, implementing security controls, maintaining multi-tenant environments, and managing Hadoop's high availability features, YARN is designed for multiple, diverse user applications operating on a shared multi-tenant platform. In addition to MapReduce, YARN supports several other processing models.

High Availability (HA) Mode of the Namenode

The name node stores all metadata for the Hadoop cluster. It is extremely important because an event such as an unexpected machine crash can bring down the entire Hadoop cluster. Hadoop 2.0 offers a solution to this problem: the High Availability feature of HDFS allows two redundant name nodes to run in the same cluster. These name nodes run in an active/passive configuration, with one operating as the primary name node and the other as a hot standby.

Both name nodes share an edits log, with all changes written to shared NFS storage. At any point in time, only a single writer is allowed to access this shared storage. The passive name node also reads from the shared storage and so keeps its metadata about the cluster up to date. If the active name node fails, the passive name node takes over as the active one and starts writing to the shared storage.
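A rough idea of how such an HA pair is described in configuration is sketched below. The nameservice id, host names, and NFS path are assumptions for illustration, and in a real deployment these keys normally live in hdfs-site.xml rather than being set in code.

    import org.apache.hadoop.conf.Configuration;

    public class HaConfSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("dfs.nameservices", "mycluster");            // illustrative nameservice id
            conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");    // active and standby
            conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
            conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
            // Shared edits log on NFS storage, as described above (path is an example).
            conf.set("dfs.namenode.shared.edits.dir", "file:///mnt/nfs/hadoop/ha-edits");
            // Client-side proxy that transparently fails over to the active Namenode.
            conf.set("dfs.client.failover.proxy.provider.mycluster",
                     "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
            System.out.println("HA namenodes: " + conf.get("dfs.ha.namenodes.mycluster"));
        }
    }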

Enhanced Utilization of Resources
In Hadoop 1.0, the JobTracker held the dual responsibility of driving the execution of MapReduce jobs and managing the resources dedicated to the cluster. With YARN on the scene, the two major functions of the overburdened JobTracker, job scheduling/monitoring and resource management, are split into separate daemons. These are:

A Resource Manager (RM) that focuses on managing the cluster's resources;

An Application Master (AM), typically one per running application, that manages an individual running application; for instance, a MapReduce job.

Note that there are no longer inflexible map and reduce slots. With YARN as the central resource manager, multiple applications can now share a common pool of resources and run side by side on Hadoop.
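To make the contrast with fixed slots concrete, the sketch below uses the YARN client API to ask the Resource Manager for a new application and to describe a generic container of memory and virtual cores. It stops short of actually submitting anything; the application name and resource sizes are arbitrary.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;

    public class YarnResourceSketch {
        public static void main(String[] args) throws Exception {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(new Configuration());
            yarnClient.start();

            // Ask the Resource Manager for a new application; the returned context
            // is where the Application Master container would be described.
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
            ApplicationId appId = ctx.getApplicationId();

            // Resources are generic (memory in MB, virtual cores), not map/reduce slots.
            ctx.setResource(Resource.newInstance(1024, 1));
            ctx.setApplicationName("resource-sketch");

            System.out.println("Would submit application " + appId);
            yarnClient.stop();
        }
    }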

Batch-Oriented Applications
In its 2.0 version, Hadoop goes well beyond its batch-oriented roots and also runs interactive and streaming applications.
Native Windows Support
Hadoop was originally developed to support the UNIX family of operating systems. Hadoop 2.0 adds native support for the Windows operating system, which extends Hadoop's reach significantly and lets it cater to the ever-growing Windows Server market.

Non-MapReduce Applications on Hadoop 2.0
Hadoop 1.0 was limited to tasks written for the MapReduce framework; MapReduce jobs were the only way to process the data stored in HDFS, and there were no other data processing models. For needs such as graph processing or real-time analysis of the data stored in HDFS, users had to move their data to alternate stores such as HBase. YARN lets Hadoop run non-MapReduce applications too: the YARN APIs can be used to write other frameworks that run on top of HDFS. This allows different non-MapReduce applications to run on Hadoop, with MPI, Giraph, Spark, and HAMA among the frameworks that have been ported to run within YARN.

Data node caching for faster access
In Hadoop 2.0, users and applications such as Pig, Hive, or HBase can identify the sets of files that need to be cached. For instance, Hive's dimension tables can now be configured to be cached in DataNode RAM, allowing faster reads for Hive queries against the most frequently looked-up tables.
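One possible way to pin such a table, assuming the HDFS centralized cache management Java API, is sketched below; the pool name and table path are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
    import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

    public class CacheDirectiveSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

            // A cache pool groups directives and their quotas (pool name is illustrative).
            dfs.addCachePool(new CachePoolInfo("hive-dimensions"));

            // Pin a frequently joined dimension table into DataNode memory.
            long directiveId = dfs.addCacheDirective(
                new CacheDirectiveInfo.Builder()
                    .setPath(new Path("/warehouse/dim_customer"))
                    .setPool("hive-dimensions")
                    .setReplication((short) 1)
                    .build());

            System.out.println("Cache directive id: " + directiveId);
        }
    }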

HDFS-- Multiple Storage Tiers

Another important difference between Hadoop 1.0 and Hadoop 2.0 is the latter's support for heterogeneous storage. Whether the hardware is SSDs or spinning disks, Hadoop 1.0 treats all storage devices on a DataNode as a single uniform pool: users could place their data on an SSD, but they had no control over which data ended up there. Heterogeneous storage is an integral part of Hadoop from version 2.0 onwards. The approach is quite general and even permits users to treat memory as a storage tier for temporary and cached data.
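As a hedged sketch, later 2.x releases expose this through storage policies attached to directories. The directory names below are examples, and the policy names ("ALL_SSD", "COLD") assume a release that ships the storage policy API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class StoragePolicySketch {
        public static void main(String[] args) throws Exception {
            DistributedFileSystem dfs =
                (DistributedFileSystem) FileSystem.get(new Configuration());

            // Keep all replicas of hot data on SSD; archive cold data on dense disks.
            dfs.setStoragePolicy(new Path("/warehouse/hot"), "ALL_SSD");
            dfs.setStoragePolicy(new Path("/warehouse/archive"), "COLD");
        }
    }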

HDFS Snapshots
Hadoop 2.0 adds support for file system snapshots: point-in-time images of the complete file system or of sub-trees of the file system. The many uses of snapshots include the following (a minimal API sketch follows the list):

Protection against user errors: An admin-driven process can be set up to take snapshots periodically. If users accidentally delete files, the lost data can be restored from a snapshot that contains it.
Reliable backups: The admin can use snapshots of the entire file system or of sub-trees as the starting point for full backups, and incremental backups can be taken by copying only the differences between two snapshots.
Disaster recovery: Snapshots can also be copied to remote sites as point-in-time images for disaster recovery.
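Here is a minimal sketch of the snapshot workflow through the HDFS Java API; the directory and snapshot names are examples only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class SnapshotSketch {
        public static void main(String[] args) throws Exception {
            DistributedFileSystem dfs =
                (DistributedFileSystem) FileSystem.get(new Configuration());
            Path dir = new Path("/warehouse");   // directory name is illustrative

            // An administrator first marks the directory as snapshottable,
            // then point-in-time snapshots can be taken by name.
            dfs.allowSnapshot(dir);
            dfs.createSnapshot(dir, "nightly-backup");

            // A deleted file can later be read back from the snapshot path, e.g.
            // /warehouse/.snapshot/nightly-backup/<file>.
        }
    }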
