In hadoop-root-datanode-macmini.log:
2015-03-12 23:52:33,671 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting. java.io.IOException: Incompatible clusterIDs in /hdfs/name/dfs/data: namenode clusterID = CID-70d64aad-1dfe-4f87-af15-d53ff80db3dd; datanode clusterID = CID-388a9ec6-cb87-4b0d-97c4-3b4d5c787b76

Cause:
Reformatting the namenode gives it a new clusterID, which no longer matches the clusterID recorded by the datanode, so the datanode refuses to start.
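The mismatch can be confirmed by reading the clusterID line from each storage directory's VERSION file. A minimal sketch, assuming the storage layout from the log above (the helper name and paths are illustrative; the real directories come from dfs.name.dir / dfs.data.dir in hdfs-site.xml):

```shell
# Print the clusterID recorded in a storage directory's VERSION file.
cluster_id() {
  grep '^clusterID=' "$1" | cut -d= -f2
}

# Typical usage -- the two IDs must be identical for the datanode to start:
#   cluster_id /hdfs/name/dfs/name/current/VERSION
#   cluster_id /hdfs/name/dfs/data/current/VERSION
```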
Additionally:
This error also causes the following failure when loading data into Hive (create table itself reports no error, because the table metadata is not stored in HDFS):
hive> load data local inpath '/root/dbfile' overwrite into table employees PARTITION (country='US', state='IL');
Loading data to table default.employees partition (country=US, state=IL) Failed with exception Unable to move source file:/root/dbfile to destination hdfs://localhost:9000/user/hive/warehouse/employees/country=US/state=IL/dbfile FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTas

Solution:
Delete the directories where HDFS stores its data, then reformat HDFS (relevant properties: dfs.name.dir and dfs.data.dir; see hdfs-site.xml for the actual paths):
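For reference, the storage directories are configured in hdfs-site.xml roughly like this (the values below are illustrative, matching the paths in the log above; newer Hadoop releases name these properties dfs.namenode.name.dir and dfs.datanode.data.dir):

```xml
<!-- hdfs-site.xml: illustrative values, adjust to your install -->
<property>
  <name>dfs.name.dir</name>
  <value>/hdfs/name/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/hdfs/name/dfs/data</value>
</property>
```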
hadoop namenode -format
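Note that reformatting wipes all HDFS data. If the data on the datanode must be kept, a commonly used alternative is to copy the namenode's clusterID into the datanode's VERSION file instead of reformatting. A sketch, assuming the datanode is stopped first (the helper name and example paths are assumptions):

```shell
# Sketch: make the datanode's recorded clusterID match the namenode's.
# Stop the datanode before editing its VERSION file.
fix_datanode_clusterid() {
  nn_version=$1   # e.g. /hdfs/name/dfs/name/current/VERSION
  dn_version=$2   # e.g. /hdfs/name/dfs/data/current/VERSION
  nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
  # Rewrite the datanode's clusterID line in place
  sed -i "s/^clusterID=.*/clusterID=$nn_id/" "$dn_version"
}
```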