Installing and Configuring Hadoop 0.20.2
<name>dfs.name.dir</name>  <!-- directory where the NameNode stores the filesystem name table (metadata) -->
<value>/usr/local/hadoop/filesystem/name</value>
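For context, these two lines belong inside a <property> element in conf/hdfs-site.xml. A minimal sketch of the whole file is shown below; the dfs.data.dir path and the replication factor of 2 are illustrative assumptions, not values taken from this walkthrough:

<?xml version="1.0"?>
<!-- conf/hdfs-site.xml: site-specific HDFS settings (sketch) -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/filesystem/name</value>  <!-- NameNode metadata -->
  </property>
  <property>
    <name>dfs.data.dir</name>  <!-- assumed path: where DataNodes store blocks -->
    <value>/usr/local/hadoop/filesystem/data</value>
  </property>
  <property>
    <name>dfs.replication</name>  <!-- assumed: 2 replicas on a 3-node cluster -->
    <value>2</value>
  </property>
</configuration>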
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
  <value>hdfs-m:9001</value>  <!-- hostname of the master node (hdfs-m), where the JobTracker runs -->
  <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
</property>
</configuration>
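This block goes in conf/mapred-site.xml. Note also that the start/stop scripts used below read the cluster membership from conf/masters and conf/slaves. Inferring from the start-all.sh output later in this section, those files would contain something like the following (a sketch, in case the earlier sections did not already cover it):

# conf/slaves: one hostname per line; each runs a DataNode and a TaskTracker
localhost
hdfs-s1
hdfs-s2

# conf/masters: the host that runs the SecondaryNameNode
localhost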
5. Deploying Hadoop
All of the Hadoop environment variables and configuration files discussed so far were set up on the hdfs-m machine. Now we need to deploy Hadoop to the other machines, keeping the directory structure identical:
$ scp -r /usr/local/hadoop hdfs-s1:/usr/local/
$ scp -r /usr/local/hadoop hdfs-s2:/usr/local/
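Before moving on, a quick sanity check (not part of the original steps) can confirm that the tree landed on both slaves:

$ # verify the copied directory exists on each slave
$ for h in hdfs-s1 hdfs-s2; do ssh $h "ls -d /usr/local/hadoop"; done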
At this point, Hadoop has been deployed on all of the machines. Next, let's start it up.
6. Formatting Hadoop
Before starting anything, we first need to format the NameNode. Change into the /usr/local/hadoop directory and run the following command:
$ bin/hadoop namenode -format
10/10/18 00:12:48 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /usr/local/hadoop/filesystem/name ? (Y or N) Y
10/10/18 00:12:53 INFO namenode.FSNamesystem: fsOwner=root,root,adm,dialout,fax,cdrom,tape,audio,dip,video,plugdev,fuse,lpadmin,netdev,admin,sambashare
10/10/18 00:12:53 INFO namenode.FSNamesystem: supergroup=supergroup
10/10/18 00:12:53 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/10/18 00:12:53 INFO common.Storage: Image file of size 94 saved in 0 seconds.
10/10/18 00:12:53 INFO common.Storage: Storage directory /usr/local/hadoop/filesystem/name has been successfully formatted.
10/10/18 00:12:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
Barring surprises, you should see a message that the format succeeded. If it did not, check the log files under hadoop/logs/. Note that the interactive prompt here only accepts an uppercase Y.
7. Starting Hadoop
Now it is time to actually start Hadoop. There are several start scripts under bin/, which you can use as needed (see the note after the list on how they relate):
* start-all.sh: starts all of the Hadoop daemons (NameNode, DataNodes, JobTracker, TaskTrackers)
* stop-all.sh: stops all of the Hadoop daemons
* start-mapred.sh: starts the Map/Reduce daemons (JobTracker and TaskTrackers)
* stop-mapred.sh: stops the Map/Reduce daemons
* start-dfs.sh: starts the HDFS daemons (NameNode, DataNodes, and the SecondaryNameNode)
* stop-dfs.sh: stops the HDFS daemons
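In 0.20.2, start-all.sh is essentially start-dfs.sh followed by start-mapred.sh, so the two halves can also be started separately:

$ bin/start-dfs.sh      # NameNode, DataNodes, SecondaryNameNode
$ bin/start-mapred.sh   # JobTracker, TaskTrackers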
Once formatting is complete, Hadoop can be started:
root@ubuntu:/usr/local/hadoop/bin# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-ubuntu.out
hdfs-s1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-ubuntu.out
hdfs-s2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/logs/hadoop-root-jobtracker-ubuntu.out
hdfs-s1: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-root-tasktracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-root-tasktracker-ubuntu.out
hdfs-s2: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-root-tasktracker-ubuntu.out
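To verify that the daemons are actually up (a check not shown in the original text), the JDK's jps tool lists the running Java processes. On the master you would expect output along these lines:

$ jps
# expected entries on hdfs-m (PIDs omitted): NameNode, SecondaryNameNode,
# JobTracker, plus DataNode and TaskTracker, since localhost is also a slave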
After a successful start, the cluster can be monitored from the master through the following web interfaces:
http://hdfs-m:50070 shows how many DataNodes are in the cluster and their current status.
http://hdfs-m:50030 shows the status of the current jobs and tasks.
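The same information is also available from the command line; as a supplementary check (not part of the original text), dfsadmin prints a cluster report:

$ bin/hadoop dfsadmin -report   # total capacity, live/dead DataNodes, per-node usage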