
Running the WordCount Program on the Hadoop Platform

2012-11-26 


   1. The classic WordCount program (WordCount.java)
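The source listing was dropped from the page. For reference, here is a sketch of the classic program against the old org.apache.hadoop.mapred API that shipped with Hadoop 0.19; it compiles only with hadoop-0.19.1-core.jar on the classpath, so treat it as a best-effort reconstruction rather than the author's exact file. The inner class names Map and Reduce match the compiled class files mentioned in step 3.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Emit (token, 1) for every whitespace-separated token in the line.
        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        // Sum all counts for one token; also used as the combiner.
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
```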

  2. Compile it, with the Hadoop core jar on the classpath:

javac -classpath /home/admin/hadoop/hadoop-0.19.1-core.jar WordCount.java -d /home/admin/WordCount

  3. After compiling, three class files appear in the /home/admin/WordCount directory: WordCount.class, WordCount$Map.class, and WordCount$Reduce.class.
  cd into /home/admin/WordCount and run:

jar cvf WordCount.jar *.class

  This produces the WordCount.jar file.

  4. Prepare some input data
  The files input1.txt and input2.txt contain a few words, as follows:

[admin@host WordCount]$ cat input1.txt
Hello, i love china
are you ok?
[admin@host WordCount]$ cat input2.txt
hello, i love word
You are ok
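You can predict the job's output from these files locally. A small dependency-free sketch: splitting on whitespace mirrors the classic mapper's StringTokenizer, so punctuation stays attached ("ok" and "ok?" count separately) and case is preserved ("You" and "you" are different words); a TreeMap reproduces the sorted order of the final output file.

```java
import java.util.Map;
import java.util.TreeMap;

public class LocalWordCount {
    // Count tokens the way the classic mapper tokenizes: whitespace only,
    // punctuation kept, case-sensitive.
    static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            for (String token : line.trim().split("\\s+")) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = {
            "Hello, i love china", "are you ok?",   // input1.txt
            "hello, i love word", "You are ok"      // input2.txt
        };
        // TreeMap iterates in sorted key order, like part-00000 below.
        count(lines).forEach((w, c) -> System.out.println(w + "\t" + c));
    }
}
```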

  Create a directory on HDFS and put the input files the job needs (do not create the output directory here; the job must be able to create /tmp/output itself, as the troubleshooting notes below explain):

hadoop fs -mkdir /tmp/input
hadoop fs -put input1.txt /tmp/input/
hadoop fs -put input2.txt /tmp/input/

  5. Run the program; it prints some information about the running job.

[admin@host WordCount]$ hadoop jar WordCount.jar WordCount /tmp/input /tmp/output
10/09/16 22:49:43 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/09/16 22:49:43 INFO mapred.FileInputFormat: Total input paths to process :2
10/09/16 22:49:43 INFO mapred.JobClient: Running job: job_201008171228_76165
10/09/16 22:49:44 INFO mapred.JobClient: map 0% reduce 0%
10/09/16 22:49:47 INFO mapred.JobClient: map 100% reduce 0%
10/09/16 22:49:54 INFO mapred.JobClient: map 100% reduce 100%
10/09/16 22:49:55 INFO mapred.JobClient: Job complete: job_201008171228_76165
10/09/16 22:49:55 INFO mapred.JobClient: Counters: 16
10/09/16 22:49:55 INFO mapred.JobClient: File Systems
10/09/16 22:49:55 INFO mapred.JobClient: HDFS bytes read=62
10/09/16 22:49:55 INFO mapred.JobClient: HDFS bytes written=73
10/09/16 22:49:55 INFO mapred.JobClient: Local bytes read=152
10/09/16 22:49:55 INFO mapred.JobClient: Local bytes written=366
10/09/16 22:49:55 INFO mapred.JobClient: Job Counters
10/09/16 22:49:55 INFO mapred.JobClient: Launched reduce tasks=1
10/09/16 22:49:55 INFO mapred.JobClient: Rack-local map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient: Launched map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient: Map-Reduce Framework
10/09/16 22:49:55 INFO mapred.JobClient: Reduce input groups=11
10/09/16 22:49:55 INFO mapred.JobClient: Combine output records=14
10/09/16 22:49:55 INFO mapred.JobClient: Map input records=4
10/09/16 22:49:55 INFO mapred.JobClient: Reduce output records=11
10/09/16 22:49:55 INFO mapred.JobClient: Map output bytes=118
10/09/16 22:49:55 INFO mapred.JobClient: Map input bytes=62
10/09/16 22:49:55 INFO mapred.JobClient: Combine input records=14
10/09/16 22:49:55 INFO mapred.JobClient: Map output records=14
10/09/16 22:49:55 INFO mapred.JobClient: Reduce input records=14
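The counters line up with the data: 4 input lines yield 14 tokens (Map output records=14); the combiner runs once per map task, and because each file's 7 tokens happen to be all distinct, it emits 14 records too (Combine output records=14); the reducer then merges them into 11 distinct groups (Reduce input groups=11). A sketch checking that arithmetic:

```java
import java.util.Map;
import java.util.TreeMap;

public class CounterCheck {
    // Combine one map task's output: sum counts per token within its split.
    static Map<String, Integer> combine(String[] lines) {
        Map<String, Integer> out = new TreeMap<>();
        for (String line : lines)
            for (String token : line.trim().split("\\s+"))
                out.merge(token, 1, Integer::sum);
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> split1 = combine(new String[]{"Hello, i love china", "are you ok?"});
        Map<String, Integer> split2 = combine(new String[]{"hello, i love word", "You are ok"});

        // Every token within each split is unique, so combine output = 7 + 7.
        System.out.println("combine output records = " + (split1.size() + split2.size()));

        // Reduce merges both splits into one group per distinct token.
        Map<String, Integer> reduced = new TreeMap<>(split1);
        split2.forEach((w, c) -> reduced.merge(w, c, Integer::sum));
        System.out.println("reduce input groups = " + reduced.size());
    }
}
```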

  6. Check the results

[admin@host WordCount]$ hadoop fs -ls /tmp/output/
Found 2 items
drwxr-x--- - admin admin 0 2010-09-16 22:43 /tmp/output/_logs
-rw-r----- 1 admin admin 102 2010-09-16 22:44 /tmp/output/part-00000
[admin@host WordCount]$ hadoop fs -cat /tmp/output/part-00000
Hello, 1
You 1
are 2
china 1
hello, 1
i 2
love 2
ok 1
ok? 1
word 1
you 1
Problems you may run into

1: java.io.FileNotFoundException

   This exception came from a wrong path. Rechecking the directories showed the path had been typed as /opt/hadoop/tmp/inout when it should have been /tmp/input.

2: org.apache.hadoop.mapred.FileAlreadyExistsException

   This one mostly follows from the previous mistake. Because Hadoop jobs are expensive computations, their results are not overwritten by default: the output directory must not exist before the job runs, or this error is thrown.

So delete the output directory with:  /opt/hadoop/bin/hadoop fs -rmr /tmp/output


3:ERROR namenode.NameNode: java.io.IOException: Cannot create directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current

This happens when the hadoop-datastore directory lacks the right permissions.
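Assuming the NameNode runs as a dedicated hadoop user (the user name and path here are taken from the error message above; adjust both for your setup), giving that user ownership of the datastore directory is one way to fix it:

```shell
# Hypothetical fix: hand the datastore directory to the hadoop user,
# then restart (or re-format, on a fresh install) the NameNode.
sudo chown -R hadoop:hadoop /usr/local/hadoop-datastore
```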

