Test Result: DAGSchedulerSuite

0 failures (±0)
91 tests (±0)
Took 4.4 sec.

All Tests

Test name | Duration | Status
All shuffle files on the slave should be cleaned up when slave lost | 0.1 sec | Passed
Barrier task failures from a previous stage attempt don't trigger stage retry | 12 ms | Passed
Barrier task failures from the same stage attempt don't trigger multiple stage retries | 13 ms | Passed
Completions in zombie tasksets update status of non-zombie taskset | 13 ms | Passed
Fail the job if a barrier ResultTask failed | 18 ms | Passed
Failures in different stages should not trigger an overall abort | 75 ms | Passed
Multiple consecutive stage fetch failures should lead to job being aborted | 25 ms | Passed
Non-consecutive stage failures don't trigger abort | 58 ms | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure | 36 ms | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled | 22 ms | Passed
SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures | 1.3 sec | Passed
SPARK-23207: cannot rollback a result stage | 7 ms | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointed before) | 20 ms | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointing now) | 8 ms | Passed
SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) | 62 ms | Passed
SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) | 18 ms | Passed
SPARK-25341: abort stage while using old fetch protocol | 79 ms | Passed
SPARK-25341: continuous indeterminate stage roll back | 22 ms | Passed
SPARK-25341: retry all the succeeding stages when the map stage is indeterminate | 26 ms | Passed
SPARK-28967 properties must be cloned before posting to listener bus for 0 partition | 13 ms | Passed
SPARK-29042: Sampled RDD with unordered input should be indeterminate | 1 ms | Passed
SPARK-30388: shuffle fetch failed on speculative task, but original task succeed | 0.42 sec | Passed
Single stage fetch failure should not abort the stage | 30 ms | Passed
Spark exceptions should include call site in stack trace | 23 ms | Passed
Trigger mapstage's job listener in submitMissingTasks | 15 ms | Passed
[SPARK-13902] Ensure no duplicate stages are created | 19 ms | Passed
[SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts | 19 ms | Passed
[SPARK-3353] parent stage should have lower stage id | 0.14 sec | Passed
accumulator not calculated for resubmitted result stage | 6 ms | Passed
accumulator not calculated for resubmitted task in result stage | 5 ms | Passed
accumulators are updated on exception failures and task killed | 6 ms | Passed
avoid exponential blowup when getting preferred locs list | 94 ms | Passed
cache location preferences w/ dependency | 7 ms | Passed
cached post-shuffle | 23 ms | Passed
catch errors in event loop | 5 ms | Passed
countApprox on empty RDDs schedules jobs which never complete | 7 ms | Passed
don't submit stage until its dependencies map outputs are registered (SPARK-5259) | 18 ms | Passed
equals and hashCode AccumulableInfo | 0 ms | Passed
extremely late fetch failures don't cause multiple concurrent attempts for the same stage | 22 ms | Passed
failure of stage used by two jobs | 7 ms | Passed
getMissingParentStages should consider all ancestor RDDs' cache statuses | 5 ms | Passed
getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) | 34 ms | Passed
getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) | 22 ms | Passed
getShuffleDependenciesAndResourceProfiles correctly returns only direct shuffle parents | 1 ms | Passed
getShuffleDependenciesAndResourceProfiles returns deps and profiles correctly | 2 ms | Passed
ignore late map task completions | 11 ms | Passed
interruptOnCancel should not crash DAGScheduler | 42 ms | Passed
job cancellation no-kill backend | 9 ms | Passed
late fetch failures don't cause multiple concurrent attempts for the same map stage | 12 ms | Passed
map stage submission with executor failure late map task completions | 11 ms | Passed
map stage submission with fetch failure | 21 ms | Passed
map stage submission with multiple shared stages and failures | 0.14 sec | Passed
map stage submission with reduce stage also depending on the data | 10 ms | Passed
misbehaved accumulator should not crash DAGScheduler and SparkContext | 32 ms | Passed
misbehaved accumulator should not impact other accumulators | 23 ms | Passed
misbehaved resultHandler should not crash DAGScheduler and SparkContext | 72 ms | Passed
recursive shuffle failures | 19 ms | Passed
reduce task locality preferences should only include machines with largest map outputs | 9 ms | Passed
reduce tasks should be placed locally with map output | 9 ms | Passed
register map outputs correctly after ExecutorLost and task Resubmitted | 12 ms | Passed
regression test for getCacheLocs | 1 ms | Passed
run shuffle with map stage failure | 15 ms | Passed
run trivial job | 4 ms | Passed
run trivial job w/ dependency | 5 ms | Passed
run trivial shuffle | 10 ms | Passed
run trivial shuffle with fetch failure | 18 ms | Passed
run trivial shuffle with out-of-band executor failure and retry | 12 ms | Passed
shuffle fetch failure in a reused shuffle dependency | 14 ms | Passed
shuffle files lost when executor failure without shuffle service | 87 ms | Passed
shuffle files lost when worker lost with shuffle service | 73 ms | Passed
shuffle files lost when worker lost without shuffle service | 69 ms | Passed
shuffle files not lost when executor failure with shuffle service | 72 ms | Passed
shuffle files not lost when slave lost with shuffle service | 79 ms | Passed
simple map stage submission | 15 ms | Passed
stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) | 20 ms | Passed
stage used by two jobs, the first no longer active (SPARK-6880) | 12 ms | Passed
stages with both narrow and shuffle dependencies use narrow ones for locality | 8 ms | Passed
task end event should have updated accumulators (SPARK-20342) | 0.22 sec | Passed
task events always posted in speculation / when stage is killed | 38 ms | Passed
test 1 resource profile | 6 ms | Passed
test 2 resource profile with merge conflict config true | 76 ms | Passed
test 2 resource profiles errors by default | 5 ms | Passed
test default resource profile | 5 ms | Passed
test merge 2 resource profiles multiple configs | 1 ms | Passed
test merge 3 resource profiles | 69 ms | Passed
test multiple resource profiles created from merging use same rp | 82 ms | Passed
trivial job cancellation | 6 ms | Passed
trivial job failure | 4 ms | Passed
trivial shuffle with multiple fetch failures | 11 ms | Passed
unserializable task | 4 ms | Passed
zero split job | 4 ms | Passed
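
For context on what these cases exercise: DAGSchedulerSuite drives Spark's DAGScheduler directly with stubbed task sets, but the scheduling behavior can be approximated end to end. The Scala sketch below is illustrative only, not the suite's actual harness (the object name, master, and app name are invented for the example); it runs a trivial two-stage job on a local SparkContext, analogous in spirit to the "run trivial job" and "run trivial shuffle" cases above.

// Minimal sketch, assuming a local Spark installation on the classpath; not
// DAGSchedulerSuite's harness. reduceByKey introduces a ShuffleDependency, so
// the DAGScheduler splits this job into a shuffle map stage plus a result stage.
import org.apache.spark.{SparkConf, SparkContext}

object TrivialShuffleJobSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("trivial-shuffle-sketch")
    val sc = new SparkContext(conf)
    try {
      // Stage 1 (shuffle map stage): map to (key, 1) pairs and combine by key.
      // Stage 2 (result stage): collect the reduced output to the driver.
      val counts = sc.parallelize(1 to 100, numSlices = 4)
        .map(i => (i % 10, 1))
        .reduceByKey(_ + _)
        .collect()
      // 10 distinct keys, and the per-key counts must sum to the input size.
      assert(counts.length == 10 && counts.map(_._2).sum == 100)
    } finally {
      sc.stop()
    }
  }
}

To reproduce a report like this one from a Spark checkout, the suite can be run in isolation, e.g. build/sbt "core/testOnly *DAGSchedulerSuite" (the sbt invocation described in Spark's developer documentation).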