
Configuring Hadoop 0.22 on Fedora 8

2013-02-19 

This is a rough run-through of configuring Hadoop on Fedora 8, done as a simulated cluster on three VMware virtual machines, all running Fedora 8.

http://download.csdn.net/detail/fzxy002763/5064976

Step 1: the JDK. My Java was installed along with Fedora 8 itself. Because the later configuration needs to know its path, first check where the default Java was installed; on my machine the default path is /usr/lib/jvm/java-1.7.0-icedtea-1.7.0.0.
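
If you are not sure where your JDK lives, something like the following will find it (the exact path varies from machine to machine):

    # List the JVMs installed under the usual Fedora location
    ls /usr/lib/jvm/

    # Resolve the java binary on the PATH to its real install location
    readlink -f "$(which java)"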

Step 2: edit /etc/hosts; vi /etc/hosts will do. On the master, the configuration maps the address of each of the three machines to its hostname: master, slave1, and slave2.
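
For example, the master's /etc/hosts might look like this, with placeholder 192.168.1.x addresses standing in for your real ones:

    127.0.0.1       localhost
    192.168.1.100   master    # placeholder address, use your master's real IP
    192.168.1.101   slave1    # placeholder address
    192.168.1.102   slave2    # placeholder address

The slaves need the same master/slave1/slave2 entries so the hostnames resolve on every node.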
Then edit vi /home/hadoop/hadoop-0.22.0/conf/hadoop-env.sh and add JAVA_HOME; I pointed it at the default Java found in step 1.
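
With the default path from step 1, the addition to hadoop-env.sh is a single export, something like:

    # Point Hadoop at the JDK located in step 1
    export JAVA_HOME=/usr/lib/jvm/java-1.7.0-icedtea-1.7.0.0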

Next, edit conf/core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000/</value>
        <description></description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>fs.inmemory.size.mb</name>
        <value>10</value>
        <description>Larger amount of memory allocated for the in-memory file-system used to merge map-outputs at the reduces.</description>
    </property>
    <property>
        <name>io.sort.factor</name>
        <value>10</value>
        <description>More streams merged at once while sorting files.</description>
    </property>
    <property>
        <name>io.sort.mb</name>
        <value>10</value>
        <description>Higher memory-limit while sorting data.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description>Size of read/write buffer used in SequenceFiles.</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/storage/tmp/hadoop-${user.name}</value>
        <description></description>
    </property>
</configuration>
Then modify conf/hdfs-site.xml as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/storage/name/a,/home/hadoop/storage/name/b</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/storage/data/a,/home/hadoop/storage/data/b,/home/hadoop/storage/data/c</value>
        <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>67108864</value>
        <description>HDFS blocksize of 64MB for large file-systems.</description>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>10</value>
        <description>More NameNode server threads to handle RPCs from large number of DataNodes.</description>
    </property>
</configuration>
Next, modify conf/mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://master:19830/</value>
        <description>Host or IP and port of JobTracker.</description>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>/home/hadoop/storage/mapred/system</value>
        <description>Path on the HDFS where the MapReduce framework stores system files. Note: This is in the default filesystem (HDFS) and must be accessible from both the server and client machines.</description>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>/home/hadoop/storage/mapred/local</value>
        <description>Comma-separated list of paths on the local filesystem where temporary MapReduce data is written. Note: Multiple paths help spread disk i/o.</description>
    </property>
    <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>10</value>
        <description>The maximum number of Map tasks, which are run simultaneously on a given TaskTracker, individually. Note: Defaults to 2 maps, but vary it depending on your hardware.</description>
    </property>
    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
        <description>The maximum number of Reduce tasks, which are run simultaneously on a given TaskTracker, individually. Note: Defaults to 2 reduces, but vary it depending on your hardware.</description>
    </property>
    <property>
        <name>mapred.reduce.parallel.copies</name>
        <value>5</value>
        <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
    </property>
    <property>
        <name>mapred.map.child.java.opts</name>
        <value>-Xmx128M</value>
        <description>Larger heap-size for child jvms of maps.</description>
    </property>
    <property>
        <name>mapred.reduce.child.java.opts</name>
        <value>-Xms64M</value>
        <description>Larger heap-size for child jvms of reduces.</description>
    </property>
    <property>
        <name>tasktracker.http.threads</name>
        <value>5</value>
        <description>More worker threads for the TaskTracker's http server. The http server is used by reduces to fetch intermediate map-outputs.</description>
    </property>
    <property>
        <name>mapred.queue.names</name>
        <value>default</value>
        <description>Comma separated list of queues to which jobs can be submitted. Note: The MapReduce system always supports at least one queue with the name as default. Hence, this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, like the Capacity Scheduler (http://hadoop.apache.org/common/docs/stable/capacity_scheduler.html), support multiple queues. If such a scheduler is being used, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue using the property name mapred.job.queue.name in the job configuration. There could be a separate configuration file for configuring properties of these queues that is managed by the scheduler. Refer to the documentation of the scheduler for information on the same.</description>
    </property>
    <property>
        <name>mapred.acls.enabled</name>
        <value>false</value>
        <description>Boolean, specifying whether checks for queue ACLs and job ACLs are to be done for authorizing users for doing queue operations and job operations. Note: If true, queue ACLs are checked while submitting and administering jobs and job ACLs are checked for authorizing view and modification of jobs. Queue ACLs are specified using the configuration parameters of the form mapred.queue.queue-name.acl-name, defined below under mapred-queue-acls.xml. Job ACLs are described at Job Authorization (http://hadoop.apache.org/common/docs/stable/mapred_tutorial.html#Job+Authorization).</description>
    </property>
</configuration>
Then distribute Hadoop to the nodes, as follows:

mkdir /home/hadoop/storage
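
The mkdir needs to run on all three machines, since the configuration above stores everything under /home/hadoop/storage. After that, pushing the configured tree out to the slaves is something like the following sketch, assuming the hadoop user can already SSH to slave1 and slave2:

    # From the master: copy the configured Hadoop tree to both slaves
    scp -r /home/hadoop/hadoop-0.22.0 hadoop@slave1:/home/hadoop/
    scp -r /home/hadoop/hadoop-0.22.0 hadoop@slave2:/home/hadoop/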

Most of the configuration here follows this article: http://blog.csdn.net/shirdrn/article/details/7166513
