1. [SPARK-20327][CORE][YARN] Add CLI support for YARN custom resources, like GPUs
Commit 3946de773498621f88009c309254b019848ed490 by vanzin
[SPARK-20327][CORE][YARN] Add CLI support for YARN custom resources,
like GPUs
## What changes were proposed in this pull request?
This PR adds CLI support for YARN custom resources, e.g. GPUs and any
other resources YARN defines. The custom resources are defined with
Spark properties, no additional CLI arguments were introduced.
The properties can be defined in the following form:
**AM resources, client mode:** Format:
`spark.yarn.am.resource.<resource-name>` The property name follows the
naming convention of YARN AM cores / memory properties:
`spark.yarn.am.memory` and `spark.yarn.am.cores`.
**Driver resources, cluster mode:** Format:
`spark.yarn.driver.resource.<resource-name>` The property name follows
the naming convention of driver cores / memory properties:
`spark.driver.memory` and `spark.driver.cores`.
**Executor resources:** Format:
`spark.yarn.executor.resource.<resource-name>` The property name follows
the naming convention of executor cores / memory properties:
`spark.executor.memory / spark.executor.cores`.
For the driver resources (cluster mode) and executor resources
properties, we use the `yarn` prefix here as custom resource types are
specific to YARN, currently.
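To make the prefix convention above concrete, here is a minimal, illustrative sketch (not Spark's actual implementation; the object and method names are hypothetical) of how custom resource requests for one role can be collected from prefix-keyed properties:

```scala
// Illustrative sketch of collecting custom resource requests for a role
// from properties of the form spark.yarn.<role>.resource.<resource-name>.
// Names here (ResourceConfSketch, customResources) are assumptions for
// illustration, not Spark's real API.
object ResourceConfSketch {
  val AmPrefix       = "spark.yarn.am.resource."
  val DriverPrefix   = "spark.yarn.driver.resource."
  val ExecutorPrefix = "spark.yarn.executor.resource."

  /** Return (resource-name -> requested amount) for properties under one prefix. */
  def customResources(conf: Map[String, String], prefix: String): Map[String, String] =
    conf.collect {
      case (key, value) if key.startsWith(prefix) =>
        key.stripPrefix(prefix) -> value
    }
}
```

For example, with `spark.yarn.executor.resource.gpu=2` set, extracting under the executor prefix yields the single entry `gpu -> 2`, while standard properties like `spark.executor.memory` are left untouched.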
**Validation:** Please note that validation logic was added to prevent a
resource from being requested in two ways at once. For example, defining
both of the following configs:
"--conf", "spark.driver.memory=2G",
"--conf", "spark.yarn.driver.resource.memory=1G"
will stop execution before it starts and print an error message.
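The duplicate-definition check can be sketched as follows. This is an assumption-laden simplification of what the PR's validator does, not its actual code; the object name `ValidationSketch` and the hard-coded key list are hypothetical:

```scala
// Sketch of the validation described above: standard resources (memory,
// cores) must not also be requested through the new-style
// spark.yarn.<role>.resource.* properties. Simplified for illustration.
object ValidationSketch {
  // New-style keys that would duplicate a standard Spark property.
  private val disallowedKeys = Seq(
    "spark.yarn.driver.resource.memory",
    "spark.yarn.driver.resource.cores",
    "spark.yarn.executor.resource.memory",
    "spark.yarn.executor.resource.cores")

  /** Return one error message per config key that redefines a standard resource. */
  def validate(conf: Map[String, String]): Seq[String] =
    disallowedKeys.filter(conf.contains).map { key =>
      s"Error: $key should be configured via its standard Spark property instead."
    }
}
```

Under this sketch, a configuration containing both `spark.driver.memory=2G` and `spark.yarn.driver.resource.memory=1G` produces a non-empty error list, so submission can be aborted before the application starts.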
## How was this patch tested?
Unit tests + manual execution with Hadoop 2 and Hadoop 3 builds.
Testing has been performed on a real cluster with Spark and YARN
configured:
- Cluster and client mode
- Request resource types with lowercase and uppercase units
- Start a Spark job requesting only standard resources (mem / cpu)
Error handling cases:
- Request unknown resource type
- Request a resource type (either memory / cpu) with duplicate configs at
the same time (e.g. with this config:
--conf \
--conf spark.yarn.driver.resource.memory=2G \
--conf spark.yarn.executor.resource.memory=3G \
), ResourceTypeValidator handles these cases well, so it is not permitted.
- Request a standard resource (memory / cpu) with the new-style configs
(e.g. --conf); this is not permitted and is handled well.
An example of how I ran the test cases:
```
cd ~; export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop/
./spark-2.4.0-SNAPSHOT-bin-custom-spark/bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--driver-memory 1G \
--driver-cores 1 \
--executor-memory 1G \
--executor-cores 1 \
--conf spark.logConf=true \
--conf spark.yarn.executor.resource.gpu=3G \
--verbose \
```
Closes #20761 from szyszy/SPARK-20327.
Authored-by: Szilard Nemeth <>
Signed-off-by: Marcelo Vanzin <>
The file was added resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ResourceRequestHelper.scala
The file was added resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ResourceRequestHelperSuite.scala
The file was modified resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala (diff)
The file was modified resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala (diff)
The file was modified resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala (diff)
The file was added resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ResourceRequestTestHelper.scala
The file was modified docs/ (diff)
The file was modified resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala (diff)
The file was modified resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnAllocatorSuite.scala (diff)