| Pivotal HD / HDP | 3.0.x / 2.x |
Does Spark support YARN ResourceManager HA or HDFS NameNode HA?
If the YARN ResourceManager or the HDFS NameNode fails over to the standby, will Spark continue to work correctly?
Spark requires no HA-specific YARN or HDFS configuration of its own. Spark is simply pointed at the cluster's Hadoop client configuration files in /etc/hadoop/conf.
So, if the cluster is configured for YARN ResourceManager HA or HDFS NameNode HA, Spark picks up those HA settings from the same configuration files.
If either service fails over to the standby, Spark will continue to work correctly.
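As a minimal sketch of the point above: the only Spark-side step is exporting `HADOOP_CONF_DIR` so Spark reads the same client configs as the rest of the cluster. The application class and jar path below are illustrative placeholders; the HA behavior itself comes entirely from the properties in `hdfs-site.xml` and `yarn-site.xml` (e.g. `dfs.nameservices`, `yarn.resourcemanager.ha.enabled`), not from any Spark flag.

```shell
# Point Spark at the cluster's Hadoop client configuration
# (core-site.xml, hdfs-site.xml, yarn-site.xml live here, including
# any NameNode / ResourceManager HA properties).
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit as usual; no HA-specific Spark options are required.
# Class name and jar path below are illustrative, adjust for your install.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar
```

Because HDFS HA clients address the logical nameservice (e.g. `hdfs://mycluster/...`, as defined by `dfs.nameservices`) rather than a specific NameNode host, a failover to the standby is transparent to the running Spark job.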