March 03, 2016

HDFS - How to recover corrupted HDFS metadata in Hadoop 1.2.X?

You might be running Hadoop in production, sometimes with terabytes of data residing in HDFS. HDFS metadata can get corrupted, and when it does, the Namenode won't start. When you check the Namenode logs, you will see exceptions like:

ERROR org.apache.hadoop.dfs.NameNode:
    at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(
    at org.apache.hadoop.dfs.FSImage.loadFSEdits(
    at org.apache.hadoop.dfs.FSImage.loadFSImage(
    at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(
    at org.apache.hadoop.dfs.FSDirectory.loadFSImage(
    at org.apache.hadoop.dfs.FSNamesystem.initialize(
    at org.apache.hadoop.dfs.FSNamesystem.<init>(
    at org.apache.hadoop.dfs.NameNode.initialize(
    at org.apache.hadoop.dfs.NameNode.<init>(
    at org.apache.hadoop.dfs.NameNode.<init>(
    at org.apache.hadoop.dfs.NameNode.createNameNode(
    at org.apache.hadoop.dfs.NameNode.main(

If this happens in a development environment, you can always format HDFS and continue; some blog posts even suggest doing exactly that.


In production, though, Hadoop administrators can't simply format HDFS. What you can do is recover HDFS to the last checkpoint. You might lose some data files, but the bulk of the data (in my case, more than 90%) should be safe. Let's see how to recover corrupted HDFS metadata.
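For reference, the Namenode directory discussed below is whatever dfs.name.dir points to in hdfs-site.xml (the Hadoop 1.x property name); a typical entry, with an illustrative path, looks like:

```xml
<!-- hdfs-site.xml: the path below is illustrative, check your own installation -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value>
</property>
```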

Hadoop periodically creates checkpoints in the Namenode directory. You will typically see three folders there:
  1. current
  2. image
  3. previous.checkpoint

Most probably it is the current directory that is corrupted. To recover:
  • Stop all Hadoop services on all nodes.
  • Back up both the "current" and "previous.checkpoint" directories.
  • Delete the "current" directory.
  • Rename "previous.checkpoint" to "current".
  • Restart the Hadoop services.
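The directory shuffle above can be rehearsed on a scratch directory before touching the real Namenode directory; a minimal sketch (the /tmp path and empty directories are illustrative stand-ins, not your real dfs.name.dir):

```shell
# Rehearsal on a throwaway directory -- stands in for the Namenode dir.
rm -rf /tmp/nn-rehearsal
mkdir -p /tmp/nn-rehearsal/current /tmp/nn-rehearsal/previous.checkpoint
cd /tmp/nn-rehearsal

# Back up both directories before touching anything.
cp -r current current.old
cp -r previous.checkpoint previous.checkpoint.old

# Delete the (corrupted) current directory, then promote the checkpoint.
rm -r current
mv previous.checkpoint current

ls   # current  current.old  previous.checkpoint.old
```

The point of the backups is that if the checkpoint also turns out to be unusable, you can put everything back exactly as it was.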

The steps I followed are listed above; the commands below are what I ran to recover HDFS. They might change slightly depending on your installation. Note that "current" must be removed (after backing it up) before the checkpoint can take its name.

cd <namenode.dir>
cp -r current current.old
cp -r previous.checkpoint previous.checkpoint.old
rm -r current
mv previous.checkpoint current

That's all! Everything was fine after that.

