Pivotal Knowledge Base

Writing data to HDFS via the NFS gateway fails with "Input/output error"

Environment

  • PHD 2.1

Problem

  1. Mount HDFS at a mount point on a client node via the NFS gateway:
    [root@hdm1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock 192.168.4.33:/ /hdfs
    [root@hdm1 conf]# df -h
    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/vg_pccadmin-lv_root   32G  3.1G   27G  11% /
    tmpfs                            3.9G     0  3.9G   0% /dev/shm
    /dev/sda1                        485M   32M  428M   7% /boot
    192.168.4.33:/                   281G   41G  241G  15% /hdfs
    [root@hdm1 ~]# ls -l /hdfs
    total 4
    drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:13 apps
    drwxr-xr-x 8 gpadmin hadoop 256 Feb 16 17:40 hawq_data
    drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:15 hive
    drwxr-xr-x 3 mapred hadoop 96 Feb 16 17:14 mapred
    drwxrwxrwx 3 hdfs hadoop 96 Feb 24 00:04 tmp
    drwxrwxrwx 4 hdfs hadoop 128 Feb 24 00:04 user
    drwxr-xr-x 3 hdfs hadoop 96 Feb 16 17:15 yarn
  2. Copying a file to HDFS via the mount point fails with "Input/output error":
    [root@hdm1 ~]# cp install.log /hdfs/tmp/
    cp: cannot create regular file `/hdfs/tmp/install.log': Input/output error

Cause

The following error message is observed in the nfs3 daemon log file:

15/02/24 21:26:53 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time for hdfs is not configured. Please set dfs.namenode.accesstime.precision configuration parameter.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1908)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:920)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:811)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:63071)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

The error message indicates that the configuration parameter dfs.namenode.accesstime.precision is either missing or set to zero. In other words, the client mounted the export with access-time updates enabled, but the feature is disabled on the NameNode. As the stack trace shows, the NFS client's access-time update is translated into a setTimes RPC, which the NameNode rejects, so the write fails with an I/O error.
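Grepping for the property name is a quick way to confirm this cause. The sketch below recreates the relevant log lines in a temporary file so it is self-contained; on a real gateway host, point the grep at the nfs3 daemon's actual log file (its path varies by installation).

```shell
# Sketch: search the NFS gateway log for the accesstime error. A sample log
# file is created here so the command is self-contained; substitute the real
# nfs3 daemon log path on an actual gateway host.
LOG=/tmp/sample-nfs3.log
cat > "$LOG" <<'EOF'
15/02/24 21:26:53 WARN nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time for hdfs is not configured. Please set dfs.namenode.accesstime.precision configuration parameter.
EOF
COUNT=$(grep -c "dfs.namenode.accesstime.precision" "$LOG")
echo "matching lines: ${COUNT}"
```

A non-zero count confirms the gateway is hitting this specific NameNode rejection rather than a permissions or connectivity problem.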

Fix

Add the dfs.namenode.accesstime.precision property to the hdfs-site.xml configuration file on the NameNode, as illustrated below:

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for HDFS file is precise upto this value. 
    The default value is 1 hour. Setting a value of 0 disables
    access times for HDFS.
  </description>
</property>

Note that this property should be added to both NameNodes if High Availability is enabled. A NameNode restart is required for the change to take effect.
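After the restart, the setting can be double-checked. The sketch below parses a sample hdfs-site.xml fragment with awk; the sample file and the config path mentioned in the comment are assumptions. On a live cluster, `hdfs getconf -confKey dfs.namenode.accesstime.precision` reports the effective value directly.

```shell
# Minimal verification sketch: extract the configured access-time precision
# from hdfs-site.xml. A sample file is built in /tmp to keep this runnable;
# on a real NameNode, point CONF at the live hdfs-site.xml instead
# (the exact path depends on the installation).
CONF=/tmp/hdfs-site-sample.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.accesstime.precision</name>
    <value>3600000</value>
  </property>
</configuration>
EOF
# Print the value that follows the property name (digits only).
PRECISION=$(awk '/dfs.namenode.accesstime.precision/{getline; gsub(/[^0-9]/,""); print; exit}' "$CONF")
echo "accesstime precision: ${PRECISION} ms"
```

A value of 3600000 ms (one hour) is the Hadoop default; any non-zero value allows the gateway's setTimes calls to succeed.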

Another solution is to disable access-time updates on the client side by mounting the export with the "noatime" option, on Unix systems whose NFS client supports this option.
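On such systems the remount might look like the sketch below. The server address and mount point are taken from the example above; the commands are printed rather than executed here, since a real run needs root privileges and a reachable gateway.

```shell
# Hypothetical remount with access-time updates disabled. "noatime" keeps the
# NFS client from issuing access-time updates, so the gateway never invokes
# the NameNode's setTimes RPC. Echoed for illustration only.
OPTS="vers=3,proto=tcp,nolock,noatime"
echo "umount /hdfs"
echo "mount -t nfs -o ${OPTS} 192.168.4.33:/ /hdfs"
```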

Refer to the Apache Hadoop documentation for details about setting up the HDFS NFS gateway.
