amazon web services - How to prevent EMR Spark step from retrying?


I have an AWS EMR cluster (emr-4.2.0, Spark 1.5.2) and I am submitting steps with the AWS CLI. The problem is that if the Spark application fails, YARN tries to run the application again (under the same EMR step). How can I prevent this?

I tried setting --conf spark.yarn.maxAppAttempts=1, and it is correctly set in Environment/Spark Properties, but it doesn't prevent YARN from restarting the application.
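For context, a sketch of how such a step might be submitted with the AWS CLI; the cluster ID, step name, class, and S3 path below are placeholders, not from the original question:

    # Submit a Spark step to an existing EMR cluster (EMR 4.x step syntax).
    # j-XXXXXXXXXXXXX, com.example.MyApp, and the S3 path are hypothetical.
    aws emr add-steps \
      --cluster-id j-XXXXXXXXXXXXX \
      --steps Type=Spark,Name="MyApp",ActionOnFailure=CONTINUE,\
    Args=[--deploy-mode,cluster,--conf,spark.yarn.maxAppAttempts=1,\
    --class,com.example.MyApp,s3://my-bucket/my-app.jar]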

You should try setting spark.task.maxFailures to 1 (the default is 4).

Meaning:

Number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
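Assuming the step invokes spark-submit as above, the setting can be passed as another --conf flag; only the two property names are from this thread, the class and JAR path are placeholders:

    # Limit both application-level and task-level retries.
    # spark.yarn.maxAppAttempts caps how many times YARN relaunches the
    # whole application; spark.task.maxFailures caps retries of a single
    # task within one application attempt.
    spark-submit \
      --deploy-mode cluster \
      --conf spark.yarn.maxAppAttempts=1 \
      --conf spark.task.maxFailures=1 \
      --class com.example.MyApp \
      s3://my-bucket/my-app.jar

Note the distinction between the two properties: spark.yarn.maxAppAttempts controls how many times YARN launches the application as a whole, while spark.task.maxFailures controls retries of an individual task inside one attempt.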

