iceflyingfox

Hadoop Learning Diary 1: Environment Setup


Environment Setup

OS:  Ubuntu Server 11.10 64-bit
JDK: 1.6.0_31

Hostname    IP               Role
wenbo00     192.168.182.130  NameNode, master, JobTracker
wenbo01     192.168.182.132  DataNode, slave, TaskTracker
wenbo02     192.168.182.133  DataNode, slave, TaskTracker

Skip the graphical desktop to save memory. My machine is a Y470; running all three servers at once under VMware Player uses only about 60% of its RAM.

Also, to keep every server's environment identical, configure one machine first and then clone its image with VMware Player.

Add the following entries to /etc/hosts on all three machines:

192.168.182.130 wenbo00
192.168.182.132 wenbo01
192.168.182.133 wenbo02
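
A quick way to append and check these entries (sketched against a demo file here; on a real node the target is /etc/hosts, edited as root):

```shell
# Demo: append the cluster entries to a hosts file and verify them.
# HOSTS is a demo path; on a real node use /etc/hosts (as root).
HOSTS=/tmp/hosts.demo
cat >> "$HOSTS" <<'EOF'
192.168.182.130 wenbo00
192.168.182.132 wenbo01
192.168.182.133 wenbo02
EOF
grep -c 'wenbo' "$HOSTS"   # expect at least 3 matching lines
```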

Set up passwordless SSH from the NameNode to itself and to the DataNodes:

1. On the NameNode, generate a key pair: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
2. Append the generated id_dsa.pub to the authorized keys: cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3. Copy id_dsa.pub to each DataNode and repeat step 2 there: scp ~/.ssh/id_dsa.pub root@192.168.182.132:/home/ssh

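The steps above can be sketched as one script. This is only a sketch: it works in a demo directory so it is self-contained, and it generates an RSA key because recent OpenSSH releases have dropped DSA support (the post itself uses DSA):

```shell
# Sketch of the passwordless-SSH setup. KEYDIR is a demo path; on the
# real NameNode use ~/.ssh. RSA is used here since modern OpenSSH
# no longer accepts DSA keys.
KEYDIR=/tmp/ssh_demo
mkdir -p "$KEYDIR"
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q          # step 1: generate key pair
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"   # step 2: authorize it locally
chmod 600 "$KEYDIR/authorized_keys"
# Step 3, on a real cluster (hostnames from this post):
#   for node in wenbo01 wenbo02; do
#     scp ~/.ssh/id_dsa.pub "$node":/tmp/ \
#       && ssh "$node" 'cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys'
#   done
```

Afterwards `ssh wenbo01` from the NameNode should log in without a password prompt.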
Install Hadoop on every machine and configure the environment variables by adding the following to /etc/profile:

export HADOOP_HOME=/home/hadoop-1.0.1

export PATH=$HADOOP_HOME/bin:$PATH

Point Hadoop at the JDK on every machine by adding the following to HADOOP_HOME/conf/hadoop-env.sh:

export JAVA_HOME=/home/java/jdk1.6.0_31

On the NameNode, edit /home/hadoop-1.0.1/conf/masters and /home/hadoop-1.0.1/conf/slaves:

masters: 192.168.182.130
slaves:  192.168.182.132
         192.168.182.133

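Writing the two files can be sketched in shell (CONF below is a demo path standing in for the real /home/hadoop-1.0.1/conf):

```shell
# Demo: write the masters and slaves files for this cluster.
# CONF is a demo path; on the real NameNode use /home/hadoop-1.0.1/conf.
CONF=/tmp/conf_demo
mkdir -p "$CONF"
echo '192.168.182.130' > "$CONF/masters"
printf '%s\n' 192.168.182.132 192.168.182.133 > "$CONF/slaves"
cat "$CONF/slaves"
```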
Edit the configuration files under /home/hadoop-1.0.1/conf/.

core-site.xml

<configuration>
<property>
   <name>hadoop.tmp.dir</name>
   <value>/home/hadoop/tmp</value>
</property>
<property>
   <name>fs.default.name</name>
   <value>hdfs://wenbo00:9000</value>
</property>
</configuration>
 

hdfs-site.xml

<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
</configuration>
 

 

mapred-site.xml

<configuration>
<property>
   <name>mapred.job.tracker</name>
   <value>wenbo00:9001</value>
</property>
</configuration>

Use the same configuration on all three machines.
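
Keeping the machines identical can be scripted. A sketch, with local demo directories standing in for the real copy to wenbo01/wenbo02:

```shell
# Demo: replicate a conf directory and verify the copies match.
# On the real cluster, replace the local cp with e.g.:
#   scp -r /home/hadoop-1.0.1/conf/ wenbo01:/home/hadoop-1.0.1/
SRC=/tmp/conf_src; DST=/tmp/conf_dst
mkdir -p "$SRC" "$DST"
printf '<configuration/>\n' > "$SRC/core-site.xml"
cp -r "$SRC/." "$DST/"
diff -r "$SRC" "$DST" && echo "configs identical"
```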

 

After the configuration is complete:

On the NameNode, format HDFS before the first start (hadoop namenode -format), then start the cluster with start-all.sh.

Running the jps command on the NameNode should show:

2713 NameNode
2971 JobTracker
3102 Jps
2875 SecondaryNameNode

 

Running the jps command on a DataNode should show:

2184 TaskTracker
2256 Jps
2076 DataNode

 

View the cluster status from the NameNode with hadoop dfsadmin -report, which shows:


Configured Capacity: 40159797248 (37.4 GB)
Present Capacity: 34723860480 (32.34 GB)
DFS Remaining: 34723794944 (32.34 GB)
DFS Used: 65536 (64 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.182.132:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 28687 (28.01 KB)
Non DFS Used: 2717982705 (2.53 GB)
DFS Remaining: 17361887232 (16.17 GB)
DFS Used%: 0%
DFS Remaining%: 86.46%
Last contact: Tue Mar 13 03:10:29 PDT 2012


Name: 192.168.182.133:50010
Decommission Status : Normal
Configured Capacity: 20079898624 (18.7 GB)
DFS Used: 36849 (35.99 KB)
Non DFS Used: 2717954063 (2.53 GB)
DFS Remaining: 17361907712 (16.17 GB)
DFS Used%: 0%
DFS Remaining%: 86.46%
Last contact: Tue Mar 13 03:10:29 PDT 2012