
Installing Hadoop in Fully Distributed Mode on CentOS 5.6

2012-08-14 

Experiment environment

    OS: CentOS 5.6
    JDK: jdk-6u26-linux-i586-rpm.bin
    Account: hadoop
    Install directory: /usr/local/hadoop
    Hostnames: master, slave1, slave2
Experiment goal

Build a cluster of three machines:

master: NameNode, JobTracker, DataNode, TaskTracker
slave1: DataNode, TaskTracker
slave2: DataNode, TaskTracker

This is not actually the best cluster layout; it is set up this way only to exercise multiple nodes in the experiment.
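Since the nodes address each other by hostname (master, slave1, slave2), each machine's /etc/hosts should map those names to the cluster IPs. A minimal sketch, assuming master is 192.168.60.149 (the IP used in the configs below); the slave IPs here are placeholders, adjust them to your network:

```
192.168.60.149  master
192.168.60.150  slave1    # assumed IP, adjust to your network
192.168.60.151  slave2    # assumed IP, adjust to your network
```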

Installation

    Make sure the Sun JDK is installed on every machine.
    Install Hadoop into the same directory (/usr/local/hadoop) on every machine.
    Make sure JAVA_HOME=/usr/java/jdk1.6.0_26 is set correctly in hadoop/conf/hadoop-env.sh.
    Create a hadoop account on every machine:

 #useradd hadoop
 #passwd hadoop
Make sure master can log in to the slaves without a password:

 $ssh-keygen -t dsa    (I left the passphrase empty to make testing easier; in a real environment, install keychain instead)
 $cd .ssh
 $cat id_dsa.pub > authorized_keys
 $chmod 600 authorized_keys    (set the permissions to 600, otherwise ssh will not read the public key)
Distribute the public key:
 $ssh-copy-id slave1
 $ssh-copy-id slave2
Configuration file overview

    NameNode: core-site.xml
    JobTracker: mapred-site.xml
    DataNode: hdfs-site.xml
    master: masters
    slave: slaves

Configuration
Edit the NameNode configuration:
  $vi core-site.xml
  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://192.168.60.149:9000/</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/hadooptmp</value>
    </property>
  </configuration>
Edit the JobTracker configuration:
  $vi mapred-site.xml
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>192.168.60.149:9001</value>
    </property>
    <property>
      <name>mapred.local.dir</name>
      <value>/usr/local/hadoop/mapred/local</value>
    </property>
    <property>
      <name>mapred.system.dir</name>
      <value>/tmp/hadoop/mapred/system</value>
    </property>
  </configuration>
Edit the DataNode configuration:
  $vi hdfs-site.xml
  <configuration>
    <property>
      <name>dfs.name.dir</name>
      <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/usr/local/hadoop/hdfs/data</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
  </configuration>
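Typos in these XML files are a common source of startup failures, so it can help to sanity-check each *-site.xml before starting the cluster. A small convenience sketch (not part of the original guide) that lists the property name/value pairs in a Hadoop-style config:

```python
# Parse a Hadoop *-site.xml and return its <property> name/value pairs.
# Convenience sketch for double-checking configs before starting daemons.
import xml.etree.ElementTree as ET

def read_hadoop_conf(xml_text):
    """Return {name: value} for every <property> in a Hadoop config."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

sample = """<configuration>
  <property><name>dfs.replication</name><value>3</value></property>
</configuration>"""
print(read_hadoop_conf(sample))  # {'dfs.replication': '3'}
```

In practice you would read the file contents from /usr/local/hadoop/conf/ and pass them to read_hadoop_conf; a malformed file raises a parse error immediately instead of failing later at daemon startup.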
Configure slave1 and slave2
The mapred-site.xml and hdfs-site.xml files on slave1 and slave2 are identical to the ones edited on master above; put the same two files in place under /usr/local/hadoop/conf on each slave.
Set masters:
  $vi masters
  master
Set slaves:
  $vi slaves
  master
  slave1
  slave2
Running
Format the NameNode:
  $bin/hadoop namenode -format
Start all daemons:
  $/usr/local/hadoop/bin/start-all.sh
  starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-namenode-master.out
  master: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-master.out
  slave2: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-slave2.out
  slave1: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-datanode-slave1.out
  master: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-secondarynamenode-master.out
  starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-jobtracker-master.out
  slave1: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-slave1.out
  slave2: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-slave2.out
  master: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-yueyang-tasktracker-master.out
Testing
Distributed file system test
Open http://master:50030; if the Nodes count is 3, all three nodes have joined correctly.
Create a pushtest directory for testing the distributed file system:
  $bin/hadoop dfs -mkdir pushtest
Put conf/hadoop-env.sh into the pushtest directory for testing:
  $bin/hadoop dfs -put conf/hadoop-env.sh pushtest
Open http://master:50070 and click "Browse the filesystem"; if it redirects to slave1 or slave2, the distributed file system is working. Hadoop's default web status pages are at:
  http://localhost:50070
  http://localhost:50030
A simple demo
Hadoop ships with some simple examples. Test the word-count feature:
  $bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount pushtest testoutput
After the job runs, its progress and completion status are visible in the web UI. For the actual word counts, inspect the output files:
  $bin/hadoop fs -ls
  drwxr-xr-x   - hadoop supergroup          0 2011-07-11 11:13 /user/hadoop/pushtest
  drwxr-xr-x   - hadoop supergroup          0 2011-07-11 11:15 /user/hadoop/testoutput
  $bin/hadoop fs -ls testoutput
  Found 3 items
  -rw-r--r--   1 hadoop supergroup          0 2011-07-11 16:31 /user/hadoop/testoutput/_SUCCESS
  drwxr-xr-x   - hadoop supergroup          0 2011-07-11 16:30 /user/hadoop/testoutput/_logs
  -rw-r--r--   1 hadoop supergroup      32897 2011-07-11 16:31 /user/hadoop/testoutput/part-r-00000
  $bin/hadoop fs -cat testoutput/part-r-00000
This prints the detailed word-count results.
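What the wordcount job computes can be mirrored locally in a few lines of Python. This is only a sketch of the algorithm, not Hadoop's implementation: the map phase emits (word, 1) per whitespace-separated token, and the reduce phase sums the counts per word, which is the content of part-r-00000.

```python
# Local sketch of what the Hadoop wordcount example computes:
# map: emit (word, 1) for each whitespace-separated token;
# reduce: sum the counts for each word.
from collections import Counter

def wordcount(text):
    """Return a word -> count mapping, splitting on whitespace."""
    return Counter(text.split())

sample = "hadoop dfs hadoop mapred"
for word, count in sorted(wordcount(sample).items()):
    print(word, count)
# dfs 1
# hadoop 2
# mapred 1
```

The distributed job produces the same word/count pairs, just partitioned across reducers and written to part-r-* files.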
