Installing Hadoop and HBase
I am using Ubuntu 10.04.3. For Hadoop and HBase I use Cloudera's hadoop-0.20.2-cdh3u1 and hbase-0.90.3-cdh3u1 releases.
1. Cluster planning:
The cluster is built from three virtual machines; adding more nodes can be a later experiment.
Hostnames and IP addresses:
myCloud01, 10.63.0.121: Hadoop NameNode, DataNode / HBase HMaster
myCloud02, 10.63.0.122: Hadoop DataNode / HBase HRegionServer
myCloud03, 10.63.0.123: Hadoop DataNode / HBase HRegionServer
myCloud01 acts as master, slave, and JobTracker; myCloud02 and myCloud03 act as slaves and TaskTrackers.
Check the machine name: $hostname
To change the hostname on Ubuntu, edit the /etc/hostname file directly.
2. Preparation before installing Hadoop and HBase:
1) Create a non-root user hadoop
Cloudera's Hadoop can only be started as a non-root user, so we create a user named hadoop, with password hadoop as well.
2) Install the JDK
Hadoop requires a JDK, so it must be installed before Hadoop. I downloaded jdk-6u16-dlj-linux-i586.bin. First make the file executable:
$chmod a+x jdk-6u16-dlj-linux-i586.bin
Then install the JDK:
$./jdk-6u16-dlj-linux-i586.bin
3) Unpack hadoop-0.20.2-cdh3u1 and hbase-0.90.3-cdh3u1
Create a cdh3 directory on myCloud01, myCloud02, and myCloud03:
$mkdir /home/hadoop/cdh3
On myCloud01, unpack hadoop-0.20.2-cdh3u1, hbase-0.90.3-cdh3u1, and zookeeper-3.3.3-cdh3u1:
$tar zxvf hadoop0.20.2-cdh3u1.tar.gz -C /home/hadoop/cdh3
$tar zxvf hbase0.90.3-cdh3u1.tar.gz -C /home/hadoop/cdh3
$tar zxvf zookeeper-3.3.3-cdh3u1.tar.gz -C /home/hadoop/cdh3
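The `tar ... -C` pattern above can be rehearsed safely with a scratch tarball before touching the real CDH3 archives (the `hadoop-demo` name and temp paths below are for demonstration only):

```shell
#!/bin/sh
set -e
# Build a throwaway tarball that stands in for hadoop0.20.2-cdh3u1.tar.gz.
work=$(mktemp -d)
mkdir -p "$work/src/hadoop-demo"
echo "hello" > "$work/src/hadoop-demo/README"
tar -zcf "$work/hadoop-demo.tar.gz" -C "$work/src" hadoop-demo

# Same extraction pattern as above: -C picks the target directory,
# here a scratch dir standing in for /home/hadoop/cdh3.
mkdir -p "$work/cdh3"
tar zxf "$work/hadoop-demo.tar.gz" -C "$work/cdh3"
ls "$work/cdh3/hadoop-demo"     # the archive's top-level dir lands under -C
# (remove $work when done)
```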
On myCloud01, edit /etc/profile:
$sudo vim /etc/profile
and add the following lines:
JAVA_HOME=/home/hadoop/cdh3/jdk1.6.0_16
JRE_HOME=$JAVA_HOME/jre
HADOOP_HOME=/home/hadoop/cdh3/hadoop-0.20.2-cdh3u1
HBASE_HOME=/home/hadoop/cdh3/hbase-0.90.3-cdh3u1
ZOOKEEPER_HOME=/home/hadoop/cdh3/zookeeper-3.3.3-cdh3u1
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf:$PATH
export JAVA_HOME JRE_HOME CLASSPATH HADOOP_HOME HBASE_HOME ZOOKEEPER_HOME PATH
On myCloud02 and myCloud03, /etc/profile gets the same additions minus the ZooKeeper entries:
JAVA_HOME=/home/hadoop/cdh3/jdk1.6.0_16
JRE_HOME=$JAVA_HOME/jre
HADOOP_HOME=/home/hadoop/cdh3/hadoop-0.20.2-cdh3u1
HBASE_HOME=/home/hadoop/cdh3/hbase-0.90.3-cdh3u1
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
export JAVA_HOME JRE_HOME CLASSPATH HADOOP_HOME HBASE_HOME PATH
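After editing /etc/profile, the new variables should be verified in the current shell. A small sketch, using a temporary snippet in place of the real /etc/profile:

```shell
#!/bin/sh
# Scratch snippet standing in for the /etc/profile additions above.
profile_snippet=$(mktemp)
cat > "$profile_snippet" <<'EOF'
HADOOP_HOME=/home/hadoop/cdh3/hadoop-0.20.2-cdh3u1
HBASE_HOME=/home/hadoop/cdh3/hbase-0.90.3-cdh3u1
PATH=$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
export HADOOP_HOME HBASE_HOME PATH
EOF

# On a real node this would be: . /etc/profile
. "$profile_snippet"
echo "HADOOP_HOME=$HADOOP_HOME"
```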
Edit /etc/hosts on each of the three machines:
127.0.0.1 localhost
10.63.0.121 myCloud01
10.63.0.122 myCloud02
10.63.0.123 myCloud03
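It is worth sanity-checking that every node appears exactly once in the hosts file. A sketch, writing the entries to a scratch file first (on a real node you would append them to /etc/hosts with sudo):

```shell
#!/bin/sh
# Scratch file standing in for /etc/hosts.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 localhost
10.63.0.121 myCloud01
10.63.0.122 myCloud02
10.63.0.123 myCloud03
EOF

# Each hostname should match exactly one line (anchored at end of line).
for h in myCloud01 myCloud02 myCloud03; do
    grep -c " $h\$" "$hosts_file"
done
```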
In $HADOOP_HOME/conf/hadoop-env.sh, set JAVA_HOME:
export JAVA_HOME=/home/hadoop/cdh3/jdk1.6.0_16
In $HADOOP_HOME/conf/core-site.xml (inside the <configuration> element):
<property>
  <name>fs.default.name</name>
  <value>hdfs://myCloud01:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/tmp</value>
</property>
In $HADOOP_HOME/conf/hdfs-site.xml (inside the <configuration> element):
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/data/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/data</value>
</property>
In $HADOOP_HOME/conf/mapred-site.xml (inside the <configuration> element):
<property>
  <name>mapred.job.tracker</name>
  <value>myCloud01:9001</value>
</property>
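Each site file wraps its properties in a single <configuration> element; a mis-nested tag is an easy mistake when editing by hand. A sketch assembling a minimal core-site.xml into a scratch conf dir (the `prop` helper is just for illustration):

```shell
#!/bin/sh
# Hypothetical helper: emit one Hadoop <property> block.
prop() {
    printf '<property><name>%s</name><value>%s</value></property>\n' "$1" "$2"
}

conf=$(mktemp -d)       # stands in for $HADOOP_HOME/conf
{
    echo '<?xml version="1.0"?>'
    echo '<configuration>'
    prop fs.default.name hdfs://myCloud01:9000
    prop hadoop.tmp.dir  /data/tmp
    echo '</configuration>'
} > "$conf/core-site.xml"

# Quick check: two properties, and the NameNode URI is present.
grep -c '<property>' "$conf/core-site.xml"
```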
In $HADOOP_HOME/conf/masters:
myCloud01
In $HADOOP_HOME/conf/slaves:
myCloud01
myCloud02
myCloud03
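The masters and slaves files are plain lists with one hostname per line; a stray blank or joined line will confuse the start scripts. A sketch writing them into a scratch conf dir:

```shell
#!/bin/sh
conf=$(mktemp -d)       # stands in for $HADOOP_HOME/conf

# One hostname per line, no trailing whitespace.
printf 'myCloud01\n' > "$conf/masters"
printf 'myCloud01\nmyCloud02\nmyCloud03\n' > "$conf/slaves"

cat "$conf/slaves"
```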
In $HBASE_HOME/conf/hbase-env.sh:
export JAVA_HOME=/home/hadoop/cdh3/jdk1.6.0_16
export HBASE_CLASSPATH=/home/hadoop/cdh3/hbase-0.90.3-cdh3u1/conf
Also append to $HADOOP_HOME/conf/hadoop-env.sh so Hadoop can find the HBase jars:
export HBASE_HOME=/home/hadoop/cdh3/hbase-0.90.3-cdh3u1
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.90.3-cdh3u1.jar:$HBASE_HOME/hbase-0.90.3-cdh3u1-tests.jar:$HBASE_HOME/lib/zookeeper-3.3.3-cdh3u1.jar
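A long hand-typed classpath is easy to get wrong (for example writing $HBASE instead of $HBASE_HOME). One way to catch this early is to build HADOOP_CLASSPATH only from jars that actually exist on disk. A sketch, using empty demo jars in a scratch dir in place of the real $HBASE_HOME:

```shell
#!/bin/sh
# Scratch dir standing in for the real HBase install; the demo jars
# are empty files created only for illustration.
HBASE_HOME=$(mktemp -d)
mkdir -p "$HBASE_HOME/lib"
touch "$HBASE_HOME/hbase-0.90.3-cdh3u1.jar" \
      "$HBASE_HOME/lib/zookeeper-3.3.3-cdh3u1.jar"
# (the -tests jar is deliberately left missing to show the warning path)

HADOOP_CLASSPATH=""
for jar in "$HBASE_HOME/hbase-0.90.3-cdh3u1.jar" \
           "$HBASE_HOME/hbase-0.90.3-cdh3u1-tests.jar" \
           "$HBASE_HOME/lib/zookeeper-3.3.3-cdh3u1.jar"; do
    if [ -f "$jar" ]; then
        HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$jar"
    else
        echo "warning: missing $jar" >&2
    fi
done
echo "$HADOOP_CLASSPATH"
```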
In $HBASE_HOME/conf/hbase-site.xml (inside the <configuration> element):
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://myCloud01:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.port</name>
  <value>6000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>myCloud01</value>
</property>
In $HBASE_HOME/conf/regionservers:
myCloud01
myCloud02
myCloud03
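A final cross-check worth doing: every host listed in regionservers should resolve via /etc/hosts, or the HMaster will fail to reach it. A sketch using scratch copies of both files:

```shell
#!/bin/sh
# Scratch copies standing in for /etc/hosts and conf/regionservers.
hosts=$(mktemp)
rs=$(mktemp)
printf '10.63.0.121 myCloud01\n10.63.0.122 myCloud02\n10.63.0.123 myCloud03\n' > "$hosts"
printf 'myCloud01\nmyCloud02\nmyCloud03\n' > "$rs"

# Flag any region server hostname with no matching hosts entry.
missing=0
while read -r h; do
    grep -q " $h\$" "$hosts" || { echo "unresolved: $h"; missing=1; }
done < "$rs"
echo "missing=$missing"
```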