In this step, let's enable the HDFS Trash feature so that files deleted from the Hadoop Distributed File System are moved to a trash directory instead of being removed immediately.
-
Open the terminal and switch to the hadoop user:
su - hadoop
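Before editing any configuration, it can help to confirm that you are now the hadoop user and that the HDFS daemons are running (this assumes Hadoop was already started in an earlier step):
# Confirm the current user; should print "hadoop"
whoami
# List the running Java processes; NameNode and DataNode should appear if HDFS is up
jps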
-
Modify /home/hadoop/hadoop/etc/hadoop/core-site.xml to enable the Trash feature:
nano /home/hadoop/hadoop/etc/hadoop/core-site.xml
Add the following properties between the <configuration> tags:
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>
<property>
    <name>fs.trash.checkpoint.interval</name>
    <value>1440</value>
</property>
Save the file and exit the text editor.
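Both values are in minutes: fs.trash.interval keeps deleted files in the Trash for 1440 minutes (24 hours) before they are permanently removed, and fs.trash.checkpoint.interval controls how often a new Trash checkpoint is created. As a quick sanity check (assuming the hdfs client reads the core-site.xml you just edited), you can print the effective value:
# Print the effective trash retention in minutes; should output 1440
hdfs getconf -confKey fs.trash.interval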
-
Restart the HDFS service so the new configuration takes effect:
Stop the HDFS service:
/home/hadoop/hadoop/sbin/stop-dfs.sh
Start the HDFS service:
/home/hadoop/hadoop/sbin/start-dfs.sh
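To confirm that the daemons came back up after the restart (a quick check, assuming a single-node setup where the NameNode and DataNode run on this machine):
# NameNode, DataNode, and SecondaryNameNode should all be listed
jps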
-
Create a file in HDFS and then delete it:
Create an empty test file:
hdfs dfs -touchz /user/hadoop/test.txt
Delete the file:
hdfs dfs -rm /user/hadoop/test.txt
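With the Trash feature enabled, -rm moves the file into your .Trash directory instead of deleting it outright. If you ever need to bypass the Trash (for example, to free space immediately), you can add the -skipTrash flag; the file name below is only an illustration, not part of this exercise:
# Permanently delete a file, bypassing the Trash (illustration only; hypothetical file name)
hdfs dfs -rm -skipTrash /user/hadoop/unwanted.txt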
-
Check that the Trash feature is enabled by listing the Trash directory:
hdfs dfs -ls /user/hadoop/.Trash/Current/user/hadoop/
You should see the file you deleted in the Trash directory.
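Because the trashed file is just a regular HDFS file under .Trash/Current, you can restore it with a move, or empty the Trash early. A short sketch, assuming the file sits at the path listed above:
# Restore the deleted file from the Trash to its original location
hdfs dfs -mv /user/hadoop/.Trash/Current/user/hadoop/test.txt /user/hadoop/test.txt
# Or permanently remove everything currently in the Trash
hdfs dfs -expunge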