
Fully Distributed Installation and Configuration of a Hadoop Test Environment


Installation preparation

Prepare several Linux servers: one NameNode and n DataNodes.

Configure the network

Configure /etc/hosts on all three nodes

vi /etc/hosts

192.168.61.128 centos01

192.168.61.129 centos02

192.168.61.130 centos03
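To confirm that the hostname mappings resolve, a quick check such as the following can be run on each node:

ping -c 1 centos01

ping -c 1 centos02

ping -c 1 centos03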

Configure the network interface

centos01

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=00:0C:29:14:FC:37

TYPE=Ethernet

UUID=409f6562-d469-4dd4-b89e-9d30a04c3537

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=static

BROADCAST=192.168.61.255

IPADDR=192.168.61.128

GATEWAY=192.168.61.2

NETMASK=255.255.255.0

Restart the network service

service network restart
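After the restart, the new address and the gateway can be verified, for example:

ip addr show eth0

ping -c 1 192.168.61.2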

Configure DNS

vi /etc/resolv.conf

nameserver xxx.xxx.xxx.xxx

Set the hostname

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=centos01

Reboot the server

init 6

centos02

Clone the master node, then modify the following:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=static

BROADCAST=192.168.61.255

IPADDR=192.168.61.129

GATEWAY=192.168.61.2

NETMASK=255.255.255.0

Restart the network service

service network restart

Delete the udev persistent net rules, so that the cloned VM's network card is re-detected as eth0 with its new MAC address

rm -rf /etc/udev/rules.d/70-persistent-net.rules

Set the hostname

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=centos02

Reboot the server

init 6

Perform the same steps on centos03 as on centos02, using:

IP 192.168.61.130

hostname=centos03

Set up passwordless SSH login

Refer to the SSH passwordless login documentation, and also disable the firewall and similar services; a sketch follows below.
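As a minimal sketch (assuming the root account is used throughout, matching the scp commands later), key-based login and firewall shutdown on CentOS 6 typically look like this, run on each of the three nodes:

ssh-keygen -t rsa

ssh-copy-id root@centos01

ssh-copy-id root@centos02

ssh-copy-id root@centos03

service iptables stop

chkconfig iptables off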

Install and configure the JDK

rpm -ivh jdk-7u79-linux-x64.rpm
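The installation can be verified with:

java -version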

Extract Hadoop and configure the environment variables

tar -zxvf hadoop-2.7.0_x64.tar.gz

vi ~/.bash_profile

export JAVA_HOME=/usr/java/jdk1.7.0_79

export HADOOP_HOME=/opt/software/hadoop-2.7.0

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

source ~/.bash_profile
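A quick sanity check after sourcing the profile:

echo $JAVA_HOME

echo $HADOOP_HOME

hadoop version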

vi hadoop-2.7.0/etc/hadoop/hadoop-env.sh

vi hadoop-2.7.0/etc/hadoop/mapred-env.sh

vi hadoop-2.7.0/etc/hadoop/yarn-env.sh

Add the following environment variable to each of these files:

export JAVA_HOME=/usr/java/jdk1.7.0_79

Fully distributed Hadoop installation

centos01

cd /opt/software/hadoop-2.7.0/etc/hadoop/

vi core-site.xml

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://centos01:9000</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/opt/software/hadoop-2.7.0</value>

</property>

</configuration>
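Note that hadoop.tmp.dir above points at the Hadoop installation root itself. A dedicated subdirectory (for example /opt/software/hadoop-2.7.0/tmp, an illustrative path) keeps temporary and HDFS data separate from the installation files:

<property>

<name>hadoop.tmp.dir</name>

<value>/opt/software/hadoop-2.7.0/tmp</value>

</property>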

vi hdfs-site.xml

<configuration>

<property>

<name>dfs.replication</name>

<value>2</value>

</property>

<property>

<name>dfs.permissions.enabled</name>

<value>false</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/opt/software/hadoop-2.7.0/dfs/name</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/opt/software/hadoop-2.7.0/dfs/data</value>

</property>

</configuration>
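The name and data directories referenced above can be created up front; Hadoop will also create them itself, but doing it manually confirms the paths and permissions:

mkdir -p /opt/software/hadoop-2.7.0/dfs/name

mkdir -p /opt/software/hadoop-2.7.0/dfs/data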

mv mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>
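Optionally, the MapReduce JobHistory server can also be configured in mapred-site.xml; these are standard Hadoop 2.x properties and the ports shown are the defaults:

<property>

<name>mapreduce.jobhistory.address</name>

<value>centos01:10020</value>

</property>

<property>

<name>mapreduce.jobhistory.webapp.address</name>

<value>centos01:19888</value>

</property>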

vi yarn-site.xml

<configuration>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<property>

<name>yarn.resourcemanager.address</name>

<value>centos01:8032</value>

</property>

</configuration>
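Instead of spelling out yarn.resourcemanager.address, Hadoop 2.x also lets you set only the ResourceManager hostname, from which the individual service addresses are derived:

<property>

<name>yarn.resourcemanager.hostname</name>

<value>centos01</value>

</property>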

In Hadoop 2.x the masters file lists the host that runs the SecondaryNameNode, and slaves lists the DataNode/NodeManager hosts, so the entries should use the cluster's actual hostnames:

touch masters

touch slaves

vi masters

centos01

:wq

vi slaves

centos02

centos03

:wq

In a fully distributed environment the files on the master and slave nodes must be identical, so copy the following to the slaves:

1. The Hadoop directory

scp -r /opt/software/hadoop-2.7.0 root@centos02:/opt/software/

scp -r /opt/software/hadoop-2.7.0 root@centos03:/opt/software/

2. The system configuration file containing the environment variables

scp .bash_profile root@centos02:~

scp .bash_profile root@centos03:~

3. The hosts file

scp /etc/hosts root@centos02:/etc/hosts

scp /etc/hosts root@centos03:/etc/hosts
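After copying .bash_profile, it has to be re-read on centos02 and centos03 (or a new shell opened) before the hadoop command is available there:

source ~/.bash_profile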

Format HDFS

cd /opt/software/hadoop-2.7.0

hdfs namenode -format

(answer yes if prompted to re-format the storage directory)

Start the HDFS and YARN clusters

sbin/start-all.sh
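start-all.sh is deprecated in Hadoop 2.x; an equivalent, and usually clearer, approach is to start the two layers separately:

sbin/start-dfs.sh

sbin/start-yarn.sh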

Run the jps command on all three nodes to check the running processes
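With the configuration above, roughly the following processes should appear (the exact sets depend on which hosts are listed in slaves): NameNode, SecondaryNameNode and ResourceManager on centos01, and a DataNode plus a NodeManager on each of centos02 and centos03.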

Test by opening the NameNode web UI in a browser:

http://192.168.61.128:50070/
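The YARN ResourceManager web UI is served on port 8088 by default in Hadoop 2.x:

http://192.168.61.128:8088/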
