Product: Pivotal HDP    Versions: 2.3, 2.4, 2.5
In some rare cases, files can become stuck in the OPENFORWRITE state in HDFS. If this happens, the file data must be rewritten under a new inode to clear the OPENFORWRITE status. This article explains how to do that.
1. Stop all applications writing to HDFS.
2. Locate the files stuck in OPENFORWRITE:
hdfs fsck / -files -blocks -locations -openforwrite | grep OPENFORWRITE
3. Review the output. It can be normal for a file to remain in the OPENFORWRITE state for up to one hour after it was last written. If, after one hour with nothing writing to HDFS, the file is still in the OPENFORWRITE state, follow the instructions below.
4. Create a temporary working directory:
hdfs dfs -mkdir /tmp_working_dir_pivotal/
5. For each file that is stuck in OPENFORWRITE:
a) Move the file to the temp directory:
hdfs dfs -mv /PATH_TO_FILE/STUCK/IN/OPENFORWRITE /tmp_working_dir_pivotal/
b) COPY the file back to its original location. This forces a new inode to be created, which clears the OPENFORWRITE state:
hdfs dfs -cp /tmp_working_dir_pivotal/<filename> /PATH_TO_FILE/STUCK/IN/OPENFORWRITE
c) Once you have confirmed that the file is readable and correct, remove the copy left in the temporary directory:
hdfs dfs -rm /tmp_working_dir_pivotal/<filename>
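When many files are stuck, the move-and-copy steps above can be wrapped in a small shell function. The sketch below is an assumption-laden illustration, not part of the original procedure: the function name, the example file path, and the dry-run behavior are all made up for illustration. It defaults to printing the hdfs commands instead of running them (set DRY_RUN=0 to execute), so you can review what it would do before touching the cluster.

```shell
#!/bin/sh
# Sketch: clear the OPENFORWRITE state on one file by moving it to a
# temporary directory and copying it back (forcing a new inode).
# DRY_RUN=1 (default) only prints the commands; DRY_RUN=0 executes them.
TMPDIR=/tmp_working_dir_pivotal
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "$@"
  else
    "$@"
  fi
}

clear_openforwrite() {
  # $1: full HDFS path of a file stuck in OPENFORWRITE
  file=$1
  dir=$(dirname "$file")
  name=$(basename "$file")
  run hdfs dfs -mkdir -p "$TMPDIR"
  run hdfs dfs -mv "$file" "$TMPDIR/"
  # The copy creates a new inode, which clears the OPENFORWRITE state.
  run hdfs dfs -cp "$TMPDIR/$name" "$dir/"
}

# Example with a hypothetical path; in dry-run mode this just prints
# the three hdfs commands that would be executed.
clear_openforwrite /data/app/stuck_file
```

Remember that removing the leftover copies in the temporary directory (step 5c) is still a manual step, to be done only after verifying each restored file.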