4.1 Compiling the hadoop-2.4.1-src.tar.gz source package
On a 64-bit operating system, the source package needs to be recompiled, because the native libraries bundled with the official 2.4.1 binary release are 32-bit.
Hadoop download address: http://mirrors./apache/hadoop/common/
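A minimal sketch of the rebuild, assuming the prerequisites listed in BUILDING.txt inside the source tree (JDK, Maven, protobuf 2.5.0, cmake, and the zlib/openssl development headers) are already installed; the rebuilt 64-bit tarball ends up under hadoop-dist/target/:

hadoop@master:~$ tar zxvf hadoop-2.4.1-src.tar.gz
hadoop@master:~$ cd hadoop-2.4.1-src
hadoop@master:~/hadoop-2.4.1-src$ mvn package -Pdist,native -DskipTests -Dtar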
4.2 Extracting the hadoop-2.4.1.tar.gz package
hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf hadoop-2.4.1.tar.gz -C /home/hadoop/
4.3 Configuring Hadoop environment variables
Edit the /etc/profile file and add the following:
hadoop@master:~$ sudo vi /etc/profile
export HADOOP_DEV_HOME=/home/hadoop/hadoop-2.4.1/
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export PATH=$HADOOP_DEV_HOME/bin:$HADOOP_DEV_HOME/sbin:$PATH
To make the changes take effect, run the following command in the terminal:
hadoop@master:~$ source /etc/profile
To check that the Hadoop environment variables have taken effect, run the hadoop command in the terminal:

hadoop@master:~$ hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
4.4 Hadoop configuration
Before configuring, create the following directories in master's local filesystem: ~/dfs/name, ~/dfs/data, and ~/temp.

hadoop@master:~$ mkdir ~/dfs
hadoop@master:~$ mkdir ~/temp
hadoop@master:~$ mkdir ~/dfs/name
hadoop@master:~$ mkdir ~/dfs/data
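Equivalently, the same layout can be created in one step with mkdir -p, which makes parent directories as needed:

hadoop@master:~$ mkdir -p ~/dfs/name ~/dfs/data ~/temp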
Seven configuration files are involved here:
~/hadoop-2.4.1/etc/hadoop/hadoop-env.sh
~/hadoop-2.4.1/etc/hadoop/yarn-env.sh
~/hadoop-2.4.1/etc/hadoop/slaves
~/hadoop-2.4.1/etc/hadoop/core-site.xml
~/hadoop-2.4.1/etc/hadoop/hdfs-site.xml
~/hadoop-2.4.1/etc/hadoop/mapred-site.xml
~/hadoop-2.4.1/etc/hadoop/yarn-site.xml
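The original screenshots of the file contents are not reproduced here; the following is a minimal sketch of typical settings for this three-node layout. The concrete values (master as the NameNode/ResourceManager host, port 9000, replication factor 2, the JAVA_HOME path, and the ~/dfs and ~/temp directories created above) are assumptions, so adjust them to your own cluster.

In hadoop-env.sh and yarn-env.sh, point JAVA_HOME at the local JDK:

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_60   # assumed path; use your actual JDK location

In slaves, list the DataNode hosts, one per line:

slave1
slave2

In core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp</value>
  </property>
</configuration>

In hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

In mapred-site.xml (created by copying mapred-site.xml.template):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

In yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>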
4.5 Copying to the other nodes
On slave1:
hadoop@slave1:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/

On slave2:

hadoop@slave2:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/
4.6 Starting Hadoop
(1) Format HDFS
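The original screenshot of this step is missing; in Hadoop 2.x the NameNode is formatted once, on master, with the hdfs command, run from the install directory:

hadoop@master:~/hadoop-2.4.1$ bin/hdfs namenode -format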
(2) Start HDFS
Run the following command to start HDFS; it automatically starts the NameNode on master and the DataNodes on slave1 and slave2:
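The original screenshot is missing; the standard script for this in Hadoop 2.x is start-dfs.sh, run from the install directory on master:

hadoop@master:~/hadoop-2.4.1$ sbin/start-dfs.sh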
【Problem encountered】
mkdir: cannot create directory '/home/hadoop/hadoop-2.4.1/logs': Permission denied
【Solution】
Run the following command on master (the archive was extracted with sudo, so the files are owned by root):

hadoop@master:~$ sudo chown -R hadoop:hadoop hadoop-2.4.1/

The same command must be run on slave1 and slave2.
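For example, on slave1:

hadoop@slave1:~$ sudo chown -R hadoop:hadoop hadoop-2.4.1/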
Then restart HDFS by running start-dfs.sh again.
To check whether the Hadoop cluster is installed correctly, run jps on master; if the NameNode process is listed, master is set up properly:
hadoop@master:~/hadoop-2.4.1$ jps
31711 SecondaryNameNode
31464 NameNode
31857 Jps
Run jps on slave1; if the DataNode process is listed, slave1 is set up properly:
hadoop@slave1:~$ jps
5529 DataNode
5610 Jps
Run jps on slave2; if the DataNode process is listed, slave2 is set up properly:
hadoop@slave2:~$ jps
8119 Jps
8035 DataNode