Solved: HDFS DataNode Error When Submitting Heron Topologies

Problem Description

When submitting a topology to a Heron cluster deployed on Aurora + Mesos + ZooKeeper + HDFS, the submission fails with the following error:

18/02/18 07:16:09 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/yitian/heron/topologies/aurora/WordCountTopology-yitian-tag-0--590937850643635237.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:422)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
     at org.apache.hadoop.ipc.Client.call(Client.java:1413)
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
     at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
     at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
copyFromLocal: File /home/yitian/heron/topologies/aurora/WordCountTopology-yitian-tag-0--590937850643635237.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
[2018-02-18 07:16:10 -0800] [INFO] com.twitter.heron.statemgr.zookeeper.curator.CuratorStateManager: Closing the CuratorClient to: heron01:2181 
[2018-02-18 07:16:10 -0800] [INFO] com.twitter.heron.statemgr.zookeeper.curator.CuratorStateManager: Closing the tunnel processes 
[2018-02-18 07:16:10 +0000] [ERROR]: Failed to upload the topology package at '/tmp/tmpcJkdON/topology.tar.gz' to: '/home/yitian/heron/topologies/aurora/WordCountTopology-yitian-tag-0--590937850643635237.tar.gz'
[2018-02-18 07:16:10 +0000] [ERROR]: Failed to launch topology 'WordCountTopology'

Inspecting the Hadoop DataNode log on heron02 (a slave node) reveals the root cause:

[DISK]file:/home/yitian/hadoop/hadoop-2.7.4/tmp/dfs/data/
java.io.IOException: Incompatible clusterIDs in /home/yitian/hadoop/hadoop-2.7.4/tmp/dfs/data: namenode clusterID = CID-f04bfe7d-c4b6-4ae5-9a08-cf8d55692d7a; datanode clusterID = CID-635be1f6-eabf-4245-ba0e-1bccc1f45b11
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:777)
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1386)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1351)
     at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:637)
     at java.lang.Thread.run(Thread.java:748)
2018-02-18 07:25:36,446 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to heron01/192.168.201.131:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1386)
     at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1351)
     at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:637)
     at java.lang.Thread.run(Thread.java:748)
2018-02-18 07:25:36,447 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to heron01/192.168.201.131:9000
2018-02-18 07:25:36,550 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2018-02-18 07:25:38,551 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2018-02-18 07:25:38,554 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
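The key line is the `Incompatible clusterIDs` exception: the clusterID stored by the DataNode on heron02 differs from the NameNode's clusterID, so the DataNode refuses to register and exits, leaving HDFS with 0 live DataNodes. You can confirm the mismatch by comparing the two `VERSION` files directly. This is a sketch, not an exact recipe: the paths below are taken from the log above (`hadoop.tmp.dir` set to `tmp/` under the Hadoop install), so adjust them to your own configuration.

```shell
# Sketch: compare the clusterID recorded by the NameNode and the DataNode.
# HADOOP_DIR is assumed from the paths in the log above.
HADOOP_DIR=/home/yitian/hadoop/hadoop-2.7.4

# On the NameNode (heron01):
grep '^clusterID=' "$HADOOP_DIR/tmp/dfs/name/current/VERSION"

# On the DataNode (heron02):
grep '^clusterID=' "$HADOOP_DIR/tmp/dfs/data/current/VERSION"
```

If the two `clusterID=` lines differ, the DataNode storage is stale; this typically happens when `hdfs namenode -format` is re-run after the DataNode has already initialized its data directory, since formatting generates a fresh clusterID on the NameNode only.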

Solution

Since the DataNode's data directory on heron02 holds the old clusterID, delete that directory (here, the `tmp/` directory under the Hadoop installation) on heron02, then restart HDFS with `./start-dfs.sh`. The DataNode will re-initialize its storage and register with the NameNode's current clusterID. Note that this discards the blocks stored locally on heron02; an alternative that preserves the data is to edit the DataNode's `current/VERSION` file so its clusterID matches the NameNode's, then restart.
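The fix can be sketched as the following command sequence. This is a minimal sketch assuming the install path from the logs above and a default `sbin/` layout; run the stop/start scripts on the NameNode host (heron01) and the delete on heron02.

```shell
HADOOP_DIR=/home/yitian/hadoop/hadoop-2.7.4

# 1. On heron01: stop HDFS.
"$HADOOP_DIR/sbin/stop-dfs.sh"

# 2. On heron02: remove the stale DataNode storage.
#    WARNING: this deletes the blocks stored on this node.
rm -rf "$HADOOP_DIR/tmp"

# 3. On heron01: start HDFS again; the DataNode re-registers
#    with the NameNode's current clusterID.
"$HADOOP_DIR/sbin/start-dfs.sh"

# 4. Verify that the DataNode is now live.
hdfs dfsadmin -report
```

Once `hdfs dfsadmin -report` shows a live DataNode, resubmitting the Heron topology should succeed, since the topology package can now be replicated to at least `minReplication` (1) node.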