Hadoop Cluster Setup and Deployment
This walkthrough installs Linux virtual machines with VMware, then deploys and tests a Hadoop cluster on them.
Required software:
1. VMware Workstation 9.0.0 build-812388
2. CentOS-6.4-x86_64-LiveDVD
3. jdk-7u25-linux-x64.rpm
4. hadoop-1.1.2.tar.gz
Cluster nodes (one master, one slave):
master node: hadoopmaster, 192.168.99.201
slave node: hadoopslaver, 192.168.99.202
Installation steps:
I. Create two virtual machines in VMware, named HadoopMaster and HadoopSlaver, and install the CentOS-6.4-x86_64 system on both.
II. Set the hostnames:
1. Log in to the HadoopMaster VM, open a terminal, and switch to the root user.
2. Open /etc/sysconfig/network with vi. It contains the line HOSTNAME=localhost.localdomain (if still at the default); change localhost.localdomain to your hostname, so the file reads:
[root@hadoopmaster ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=hadoopmaster
[root@hadoopmaster ~]#
3. Edit /etc/hosts with vi so that it reads:
[root@hadoopmaster ~]# cat /etc/hosts
127.0.0.1      localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6
192.168.99.201 hadoopmaster
192.168.99.202 hadoopslaver
[root@hadoopmaster ~]#
4. Changes to these two files do not take effect immediately. Reboot, then check the hostname with uname -n:
[root@hadoopmaster ~]# uname -n
hadoopmaster
[root@hadoopmaster ~]#
5. Likewise, log in to the HadoopSlaver VM and edit /etc/sysconfig/network so that it reads:

[root@hadoopslaver ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=hadoopslaver
[root@hadoopslaver ~]#

Also make its /etc/hosts identical to the file in step 3, then reboot.
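As a side note, on CentOS 6 the hostname command applies the new name to the running system without waiting for a reboot; a minimal hedged sketch (run the matching command on each node):

# hostname hadoopmaster
# uname -n          (now prints hadoopmaster; the /etc/sysconfig/network edit keeps the name across reboots)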
III. Network configuration
1. Since the machines will act as servers, use bridged networking. In VMware, open Virtual Machine Settings -> Network Adapter and select the bridged mode. (The original shows a screenshot of this dialog, omitted here.)
2. In the guest system, configure a static IP:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.99.201
PREFIX=24
GATEWAY=192.168.99.10
DNS1=218.85.157.99
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=eth0
UUID=8feb03de-5349-4273-9cd7-af47ad76e510
ONBOOT=yes
HWADDR=00:0C:29:CA:96:4A
LAST_CONNECT=1373354523

3. Restart the network service:
# service network restart          (or: # /etc/init.d/network restart)
While restarting the network you may hit this error:

Error: Connection activation failed: Device not managed by NetworkManager

The cause is that two services are both managing the network, so one of them must be stopped:
1) Remove NetworkManager from the startup services:
# chkconfig NetworkManager off
2) Enable the default network manager:
# chkconfig network on
3) Stop NetworkManager first:
# service NetworkManager stop
4) Then restart the default manager:
# service network restart
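Once the service restarts cleanly, a quick hedged check that the static address is active (assuming the eth0 device from the ifcfg file above):

# ifconfig eth0 | grep "inet addr"      (expect inet addr:192.168.99.201)
# route -n                              (the default gateway should be 192.168.99.10)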
4. Likewise, configure hadoopslaver with the IP 192.168.99.202.
5. From hadoopmaster, ping hadoopslaver:
# ping hadoopslaver
If the ping succeeds, the IP configuration is correct.
6. If the ping fails, turn off the VM firewall:
Stop it immediately: service iptables stop
Disable it permanently: chkconfig iptables off
Run both commands, then check that the firewall is stopped:
[root@hadoopmaster ~]# service iptables status
iptables: Firewall is not running.
[root@hadoopmaster ~]#

IV. Install and configure the Hadoop cluster environment
(1) Install the JDK (commands not shown in the original; see the hedged sketch at the end of this section).
(2) Set up passwordless SSH login:
# cd /root/
# cd .ssh/          (if the .ssh directory does not exist, create it: mkdir .ssh)
1) Generate a key pair:
[root@hadoopmaster .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
ec:d5:cd:e8:91:e2:c3:f9:6f:33:9e:63:3a:3e:ac:42 root@hadoopmaster
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|      . . =      |
|     S o = o     |
|     .E+ + .     |
|      .. =..     |
|     . o+ *.     |
|     ..o+O++     |
+-----------------+
[root@hadoopmaster .ssh]# ll
total 12
-rw-------. 1 root root 1675 Jul 10 16:16 id_rsa
-rw-r--r--. 1 root root  399 Jul 10 16:16 id_rsa.pub
2) Copy id_rsa.pub within the .ssh directory under the name authorized_keys; this enables key-based login:
[root@hadoopmaster .ssh]# cp id_rsa.pub authorized_keys
3) Restrict the key file's permissions:
[root@hadoopmaster .ssh]# chmod go-rwx authorized_keys 
[root@hadoopmaster .ssh]# ll
total 16
-rw-------. 1 root root  399 Jul 10 16:20 authorized_keys
-rw-------. 1 root root 1675 Jul 10 16:16 id_rsa
-rw-r--r--. 1 root root  399 Jul 10 16:16 id_rsa.pub
4) Test:
[root@hadoopmaster .ssh]# ssh myhadoopm
The authenticity of host 'myhadoopm (192.168.80.144)' can't be established.
RSA key fingerprint is 2a:c0:f5:ea:6b:e6:11:8a:47:8a:de:8d:2e:d2:97:36.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'myhadoopm,192.168.80.144' (RSA) to the list of known hosts.
You can now log in without being asked for a password.
5) Copy the key to the slave node:
[root@hadoopmaster .ssh]# scp authorized_keys root@myhadoops:/root/.ssh
The authenticity of host 'myhadoops (192.168.80.244)' can't be established.
RSA key fingerprint is d9:63:3d:6b:16:99:f5:3c:67:fd:ed:86:96:3d:27:f7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'myhadoops,192.168.80.244' (RSA) to the list of known hosts.
root@myhadoops's password: 
authorized_keys                              100%  399     0.4KB/s   00:00
6) Test that the master can log in to the slave without a password:
[root@hadoopmaster .ssh]# ssh hadoopslaver
[root@hadoopslaver ~]# exit
logout
Connection to hadoopslaver closed.
[root@hadoopmaster .ssh]#
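A hedged sketch of step (1), the JDK installation, which the original does not show. It assumes the jdk-7u25-linux-x64.rpm from the software list and the Oracle RPM's default install path /usr/java/jdk1.7.0_25; adjust the path if your install differs. Run on both nodes as root:

# rpm -ivh jdk-7u25-linux-x64.rpm
# echo 'export JAVA_HOME=/usr/java/jdk1.7.0_25' >> /etc/profile
# echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile
# source /etc/profile
# java -version          (should report java version "1.7.0_25")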
Dependencies between the system and its components: (diagram from the original, not reproduced here)
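The original jumps straight to step 3 below, so the unpacking of Hadoop and the JAVA_HOME setting in conf/hadoop-env.sh are not shown. A minimal sketch, assuming the /opt/modules/hadoop/hadoop-1.1.2 install path that appears in the startup logs later in this article:

# mkdir -p /opt/modules/hadoop
# tar -zxf hadoop-1.1.2.tar.gz -C /opt/modules/hadoop
# echo 'export JAVA_HOME=/usr/java/jdk1.7.0_25' >> /opt/modules/hadoop/hadoop-1.1.2/conf/hadoop-env.sh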
3. Edit core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoopmaster:9000</value>
  <!-- URI of the NameNode; HDFS clients and daemons connect here -->
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-root</value>
  <!-- base directory for Hadoop's temporary files -->
</property>
</configuration>
4. Edit hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/opt/data/hadoop/hdfs/name,/opt/data1/hadoop/hdfs/name</value>
  <!-- where the NameNode stores its image files -->
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/opt/data/hadoop/hdfs/data,/opt/data1/hadoop/hdfs/data</value>
  <!-- where HDFS data blocks are stored; may list several partitions/disks, comma-separated -->
</property>
<property>
  <name>dfs.http.address</name>
  <value>hadoopmaster:50070</value>
  <!-- host and port of the HDFS web UI -->
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>hadoopmaster:50090</value>
  <!-- host and port of the secondary NameNode web UI -->
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <!-- number of HDFS replicas; usually 3 -->
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>1073741824</value>
  <!-- the DataNode reserves 1 GB of disk space for other programs rather than filling the disk; in bytes -->
</property>
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
  <!-- HDFS block size, here 128 MB per block -->
</property>
</configuration>
5. Edit mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>hadoopmaster:9001</value>
  <!-- JobTracker RPC host and port -->
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/opt/data/hadoop/mapred/mrlocal</value>
  <!-- intermediate MapReduce data; may list one directory per disk -->
  <final>true</final>
</property>
<property>
  <name>mapred.system.dir</name>
  <value>/opt/data/hadoop/mapred/mrsystem</value>
  <final>true</final>
  <!-- MapReduce system control files -->
</property>
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
  <final>true</final>
  <!-- maximum number of map slots per TaskTracker -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
  <final>true</final>
  <!-- maximum number of reduce slots per machine -->
</property>
<property>
  <name>io.sort.mb</name>
  <value>32</value>
  <final>true</final>
  <!-- memory used for sorting task output, default 100 MB; keep it smaller than the heap in mapred.child.java.opts -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx64M</value>
  <!-- maximum JVM heap for map and reduce tasks; total machine memory = system + datanode + tasktracker + (map + reduce slots) -->
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
  <!-- compress intermediate map output -->
</property>
</configuration>
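The conf/masters and conf/slaves files and the storage directories referenced above also need to be in place before formatting HDFS; the original does not show this, so the following is a hedged sketch. The slaves list contains both hosts because the startup log below shows a DataNode and TaskTracker running on the master as well:

# cd /opt/modules/hadoop/hadoop-1.1.2
# echo hadoopmaster > conf/masters                     (host for the secondary NameNode)
# printf 'hadoopmaster\nhadoopslaver\n' > conf/slaves
# mkdir -p /opt/data/hadoop/hdfs/name /opt/data/hadoop/hdfs/data          (repeat on both nodes)
# mkdir -p /opt/data1/hadoop/hdfs/name /opt/data1/hadoop/hdfs/data
# mkdir -p /opt/data/hadoop/mapred/mrlocal /opt/data/hadoop/mapred/mrsystem
# scp conf/*.xml conf/masters conf/slaves root@hadoopslaver:/opt/modules/hadoop/hadoop-1.1.2/conf/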
6. Format the NameNode:

[root@hadoopmaster bin]# ./hadoop namenode -format
13/07/11 14:35:44 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopmaster/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
13/07/11 14:35:44 INFO util.GSet: VM type       = 64-bit
13/07/11 14:35:44 INFO util.GSet: 2% max memory = 19.33375 MB
13/07/11 14:35:44 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/07/11 14:35:44 INFO util.GSet: recommended=2097152, actual=2097152
13/07/11 14:35:45 INFO namenode.FSNamesystem: fsOwner=root
13/07/11 14:35:45 INFO namenode.FSNamesystem: supergroup=supergroup
13/07/11 14:35:45 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/07/11 14:35:45 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/07/11 14:35:45 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/07/11 14:35:45 INFO namenode.NameNode: Caching file names occuring more than 10 times 
13/07/11 14:35:46 INFO common.Storage: Image file of size 110 saved in 0 seconds.
13/07/11 14:35:46 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/data/hadoop/hdfs/name/current/edits
13/07/11 14:35:46 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/data/hadoop/hdfs/name/current/edits
13/07/11 14:35:47 INFO common.Storage: Storage directory /opt/data/hadoop/hdfs/name has been successfully formatted.
13/07/11 14:35:47 INFO common.Storage: Image file of size 110 saved in 0 seconds.
13/07/11 14:35:47 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/data1/hadoop/hdfs/name/current/edits
13/07/11 14:35:47 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/data1/hadoop/hdfs/name/current/edits
13/07/11 14:35:47 INFO common.Storage: Storage directory /opt/data1/hadoop/hdfs/name has been successfully formatted.
13/07/11 14:35:47 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopmaster/127.0.0.1
************************************************************/
[root@hadoopmaster bin]#
7. Start the cluster, then verify the daemons with jps:

[root@hadoopmaster bin]# ./start-all.sh 
starting namenode, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-namenode-hadoopmaster.out
hadoopmaster: starting datanode, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-datanode-hadoopmaster.out
hadoopslaver: starting datanode, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-datanode-hadoopslaver.out
hadoopmaster: starting secondarynamenode, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-secondarynamenode-hadoopmaster.out
starting jobtracker, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-jobtracker-hadoopmaster.out
hadoopslaver: starting tasktracker, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-tasktracker-hadoopslaver.out
hadoopmaster: starting tasktracker, logging to /opt/modules/hadoop/hadoop-1.1.2/libexec/../logs/hadoop-root-tasktracker-hadoopmaster.out
[root@hadoopmaster bin]# jps
3303 DataNode
3200 NameNode
3629 TaskTracker
3512 JobTracker
3835 Jps
3413 SecondaryNameNode
[root@hadoopmaster bin]#
On the slave node:

[root@hadoopslaver ~]# jps
3371 Jps
3146 DataNode
3211 TaskTracker
[root@hadoopslaver ~]#
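As a further hedged check (50070 comes from the hdfs-site.xml above; 50030 is the stock JobTracker web port in Hadoop 1.x), dfsadmin should report two live DataNodes:

# ./hadoop dfsadmin -report | grep -i "datanodes available"      (expect: 2)

The web UIs at http://hadoopmaster:50070 (HDFS) and http://hadoopmaster:50030 (JobTracker) should show the same.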
8. Run the pi example to test the cluster. If a previous run left its temporary directory behind, the job aborts like this:

[root@hadoopmaster bin]# ./hadoop jar /opt/modules/hadoop/hadoop-1.1.2/hadoop-examples-1.1.2.jar pi 20 50
Number of Maps  = 20
Samples per Map = 50
java.io.IOException: Tmp directory hdfs://myhadoopm:9000/user/root/PiEstimator_TMP_3_141592654 already exists.  Please remove it first.
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:270)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Remove the leftover directory, then rerun the job:

[root@hadoopmaster bin]# ./hadoop fs -rmr hdfs://myhadoopm:9000/user/root/PiEstimator_TMP_3_141592654
Deleted hdfs://myhadoopm:9000/user/root/PiEstimator_TMP_3_141592654
[root@hadoopmaster bin]#
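Rerunning the same command as above should now complete; the pi example ends with a line reporting the estimated value of Pi, which confirms the MapReduce cluster is working:

# ./hadoop jar /opt/modules/hadoop/hadoop-1.1.2/hadoop-examples-1.1.2.jar pi 20 50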