HBase learning notes
Reference: http://abloz.com/hbase/book.html#d613e75
Step 1: Edit conf/hbase-site.xml (standalone mode).
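For standalone mode, a minimal hbase-site.xml only needs to point hbase.rootdir at a local directory (a sketch; the path file:///tmp/hbase is an assumed example, not from the original notes):
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///tmp/hbase</value>
</property>
</configuration>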
For the fully distributed setup, first set up passwordless SSH from the master to the other nodes:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp -P22 authorized_keys Qmadou-test2:~/.ssh/
scp -P22 authorized_keys Qmadou-test3:~/.ssh/
scp -P22 authorized_keys Qmadou-test4:~/.ssh/
ssh -v -p 22 Qmadou-test4
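If the login still asks for a password, the usual fix is to tighten the permissions on the remote ~/.ssh directory (a common troubleshooting step, not from the original notes):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys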
Open http://192.168.3.190:60010 to see the HBase Master web UI.
Open http://192.168.3.190:50070/dfshealth.jsp to see the HDFS NameNode web UI.
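The cluster state can also be checked from the HBase shell; status prints the live and dead region servers (a quick sketch):
bin/hbase shell
status    # run inside the shell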
http://www.cnblogs.com/ventlam/archive/2011/01/22/HBaseCluster.html
Tuning for hbase-site.xml: raise the region server RPC handler count to 30 and the block cache to 0.5 of the heap:
<property>
<name>hbase.regionserver.handler.count</name>
<value>30</value>
</property>
<property>
<name>hfile.block.cache.size</name>
<value>0.5</value>
</property>
The full hbase-site.xml for the distributed cluster:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://Qmadou-test1:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>192.168.3.190:60000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.3.191,192.168.3.192,192.168.3.193</value>
</property>
</configuration>
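With hbase-site.xml in place, conf/regionservers names the region server hosts (a sketch; the host list below is assumed to mirror the nodes used above):
Qmadou-test2
Qmadou-test3
Qmadou-test4
Then start the cluster from the master:
bin/start-hbase.sh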
Copy the hadoop-core jar that matches the running Hadoop into HBase's lib directory (here . is that lib directory):
cp $HADOOP_HOME/hadoop-core-1.0.4.jar .
The dfs.datanode.max.xcievers setting in conf/hdfs-site.xml must be at least 4096:
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
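This is an HDFS-side setting, so HDFS has to be restarted for it to take effect (a sketch; the scripts assume a Hadoop 1.x install):
$HADOOP_HOME/bin/stop-dfs.sh
$HADOOP_HOME/bin/start-dfs.sh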
Edit conf/hbase-env.sh.
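Typical settings touched in hbase-env.sh (a sketch; both values below are assumptions, not from the original notes):
export JAVA_HOME=/usr/local/java/jdk1.6.0_45    # assumed JDK path
export HBASE_MANAGES_ZK=true                    # assumed: let HBase manage the ZooKeeper quorum listed above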
Run ntpd so the node clocks stay synchronized:
service ntpd start/stop/restart
chkconfig ntpd on
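The upstream servers that ntpd synchronizes against are listed in /etc/ntp.conf (a sketch; the pool hostnames are assumed examples):
server 0.pool.ntp.org
server 1.pool.ntp.org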
Check the NTP port; you should see port 123 listening:
netstat -unlnp
Check whether the NTP daemon has connectivity to its upstream servers:
[root@S5 ~]# ntpstat
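ntpq lists the individual upstream peers and their offsets, which is a useful extra check (not in the original notes):
ntpq -p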
Copy two jars into HBase's lib directory: hadoop-core-1.0.4.jar (as above) and commons-configuration-1.6.jar:
cp /usr/local/product/hadoop-1.0.4/lib/commons-configuration-1.6.jar /usr/local/product/hbase-0.90.5/lib
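The replaced jars need to be on every region server as well; they can be pushed with scp (a sketch reusing the hostnames from the SSH setup above):
scp /usr/local/product/hbase-0.90.5/lib/hadoop-core-1.0.4.jar Qmadou-test2:/usr/local/product/hbase-0.90.5/lib/
scp /usr/local/product/hbase-0.90.5/lib/commons-configuration-1.6.jar Qmadou-test2:/usr/local/product/hbase-0.90.5/lib/
(repeat for Qmadou-test3 and Qmadou-test4)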
To ride out transient ZooKeeper hiccups, raise the session timeout and have region servers restart on session expiry (hbase-site.xml):
<property>
<name>zookeeper.session.timeout</name>
<value>180000</value>
</property>
<property>
<name>hbase.regionserver.restart.on.zk.expire</name>
<value>true</value>
</property>
http://leongfans.iteye.com/blog/1071584
To re-add a table's regions to .META. after they go missing, run add_table.rb (the procedure from the link above):
./hbase org.jruby.Main add_table.rb /hbase/TableName
bin/hbase org.jruby.Main add_table.rb /hbase/uid_word_t5
After restarting HBase, the number of loaded regions matched the actual number of regions again.
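The repaired table can be double-checked from the HBase shell by scanning .META. for its region entries (a sketch for HBase 0.90.x):
bin/hbase shell
scan '.META.', {COLUMNS => 'info:regioninfo'}    # run inside the shell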