Xaras Data Wizardry

Introduction

In the magical desert kingdom of Xara, the wise and benevolent King Amir sought to harness the power of the vast data resources scattered across his realm. He summoned his most skilled data wizards to devise a system that could collect, process, and analyze the kingdom's data, unlocking insights to aid in decision-making and prosperity for all.

The goal was to create a robust and scalable data platform that could integrate with the Hadoop Distributed File System (HDFS) and leverage the power of MapReduce for efficient data processing. This platform would enable the kingdom to analyze data from various sources, such as trade records, agricultural yields, and census information, empowering King Amir to make informed decisions for the betterment of his subjects.


Exploring the Hadoop Ecosystem

In this step, you will familiarize yourself with the Hadoop ecosystem and its core components: HDFS and MapReduce.

Hadoop is an open-source framework for distributed storage and processing of large datasets across clusters of commodity hardware. It consists of two main components:

  1. Hadoop Distributed File System (HDFS): A distributed file system designed to store large files across multiple machines, providing fault tolerance and high throughput access to data.

  2. MapReduce: A programming model and software framework for writing applications that process large amounts of data in parallel across a cluster of machines.

To explore the Hadoop ecosystem, you can use the following commands in your terminal:

First, switch to the hadoop user by running the following command:

su - hadoop

List the directories and files in HDFS:

hdfs dfs -ls /

Create a new directory in HDFS:

hdfs dfs -mkdir -p /home/hadoop/input

Copy a local file to HDFS:

hdfs dfs -put /home/hadoop/local/file.txt /home/hadoop/input

These commands demonstrate the basics of interacting with HDFS. The hdfs dfs command is the entry point for file system operations: the -ls option lists the contents of a directory, -mkdir -p creates a directory along with any missing parent directories, and -put copies a file from the local file system into HDFS.
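If you prefer to drive HDFS from code rather than the shell, Hadoop also exposes a Java FileSystem API. The snippet below is a minimal sketch of the same three operations; the class name HdfsExplorer is made up for illustration, and it assumes the Hadoop configuration files are on the classpath so that fs.defaultFS points at your cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExplorer {
    public static void main(String[] args) throws Exception {
        // Reads fs.defaultFS and other settings from core-site.xml on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of: hdfs dfs -ls /
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }

        // Equivalent of: hdfs dfs -mkdir -p /home/hadoop/input
        fs.mkdirs(new Path("/home/hadoop/input"));

        // Equivalent of: hdfs dfs -put /home/hadoop/local/file.txt /home/hadoop/input
        fs.copyFromLocalFile(new Path("/home/hadoop/local/file.txt"),
                             new Path("/home/hadoop/input"));

        fs.close();
    }
}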

Running a MapReduce Job

In this step, you will learn how to run a MapReduce job on the data stored in HDFS, leveraging the power of parallel processing to analyze large datasets efficiently.

MapReduce is a programming model for processing large datasets in parallel across a cluster of machines. It consists of two main phases:

  1. Map: The input data is split into smaller chunks, and each chunk is processed by a separate task called a "mapper." The mapper processes the data and emits key-value pairs.

  2. Reduce: The output from the mappers is sorted and grouped by key, and each group is processed by a separate task called a "reducer." The reducer combines the values associated with each key and produces the final result.
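For example, assuming an input line of "to be or not to be", the data flows through the two phases roughly as follows:

Input line:       to be or not to be
Map output:       (to, 1) (be, 1) (or, 1) (not, 1) (to, 1) (be, 1)
Shuffle and sort: (be, [1, 1]) (not, [1]) (or, [1]) (to, [1, 1])
Reduce output:    (be, 2) (not, 1) (or, 1) (to, 2)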

Let's run a simple MapReduce job that counts the occurrences of words in a text file. First, create a Java file named WordCount.java with the following content:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable>{

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
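            // Split the input line into tokens and emit (word, 1) for each one.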
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text,IntWritable,Text,IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
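            // Add up all the counts emitted for this word and write the total.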
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Next, compile the Java file and package it into a JAR. The $(hadoop classpath) expansion supplies the Hadoop libraries required for compilation:

mkdir ~/wordcount
javac -source 8 -target 8 -classpath $(hadoop classpath) -d ~/wordcount WordCount.java
jar -cvf ~/wordcount.jar -C ~/wordcount .

Finally, run the MapReduce job:

hadoop jar ~/wordcount.jar WordCount /home/hadoop/input/file.txt /home/hadoop/output

The WordCount class defines a MapReduce job that counts the occurrences of words in a text file. The TokenizerMapper class tokenizes each line of input text and emits (word, 1) key-value pairs. The IntSumReducer class sums up the values (counts) for each word and emits the final (word, count) pairs.

The Java file is compiled and packaged into a JAR file, which is then executed using the hadoop jar command. The input file path (/home/hadoop/input/file.txt) and output directory path (/home/hadoop/output) are provided as arguments. Note that the output directory must not already exist; Hadoop refuses to overwrite it and will fail the job if it does.
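Once the job completes, the results land in /home/hadoop/output; the quickest check is hdfs dfs -cat /home/hadoop/output/part-r-00000 (part-r-00000 is the default name of the first reducer's output file). If you would rather read the results programmatically, a minimal sketch using the FileSystem API might look like this (the class name and paths are illustrative):

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadWordCounts {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // The first (and here, only) reducer writes its results to part-r-00000
        Path result = new Path("/home/hadoop/output/part-r-00000");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(result)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // e.g. "be\t2"
            }
        }
        fs.close();
    }
}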

Summary

In this lab, you embarked on a journey to the magical desert kingdom of Xara, where you assisted King Amir in harnessing the power of the Hadoop ecosystem to process and analyze the kingdom's data. You explored the core components of Hadoop, including HDFS for distributed storage and MapReduce for parallel data processing.

Through hands-on steps, you learned how to interact with HDFS, create directories, and upload files. You also gained experience in running a MapReduce job, specifically a word count application, which demonstrated the parallel processing capabilities of Hadoop.

By completing this lab, you have acquired valuable skills in integrating Hadoop with HDFS and MapReduce, equipping you with the knowledge to tackle real-world big data challenges. This experience will undoubtedly contribute to your growth as a data wizard, empowering you to unlock insights and drive decision-making in various domains.
