Test Result : DAGSchedulerSuite

0 failures (±0)
83 tests (-8)
Took 16 sec.

All Tests

Test name | Duration | Status
All shuffle files on the slave should be cleaned up when slave lost | 0.39 sec | Passed
Barrier task failures from a previous stage attempt don't trigger stage retry | 51 ms | Passed
Barrier task failures from the same stage attempt don't trigger multiple stage retries | 89 ms | Passed
Completions in zombie tasksets update status of non-zombie taskset | 34 ms | Passed
Fail the job if a barrier ResultTask failed | 50 ms | Passed
Failures in different stages should not trigger an overall abort | 0.17 sec | Passed
Multiple consecutive stage fetch failures should lead to job being aborted | 0.21 sec | Passed
Non-consecutive stage failures don't trigger abort | 0.41 sec | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure | 0.12 sec | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled | 0.17 sec | Passed
SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures | 2 sec | Passed
SPARK-23207: cannot rollback a result stage | 35 ms | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointed before) | 0.25 sec | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointing now) | 16 ms | Passed
SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) | 0.21 sec | Passed
SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) | 31 ms | Passed
SPARK-25341: abort stage while using old fetch protocol | 0.46 sec | Passed
SPARK-25341: continuous indeterminate stage roll back | 0.11 sec | Passed
SPARK-25341: retry all the succeeding stages when the map stage is indeterminate | 0.12 sec | Passed
SPARK-28967 properties must be cloned before posting to listener bus for 0 partition | 21 ms | Passed
SPARK-29042: Sampled RDD with unordered input should be indeterminate | 2 ms | Passed
SPARK-30388: shuffle fetch failed on speculative task, but original task succeed | 0.46 sec | Passed
Single stage fetch failure should not abort the stage | 0.15 sec | Passed
Spark exceptions should include call site in stack trace | 31 ms | Passed
Trigger mapstage's job listener in submitMissingTasks | 0.12 sec | Passed
[SPARK-13902] Ensure no duplicate stages are created | 0.14 sec | Passed
[SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts | 40 ms | Passed
[SPARK-3353] parent stage should have lower stage id | 0.47 sec | Passed
accumulator not calculated for resubmitted result stage | 11 ms | Passed
accumulator not calculated for resubmitted task in result stage | 0.28 sec | Passed
accumulators are updated on exception failures and task killed | 11 ms | Passed
avoid exponential blowup when getting preferred locs list | 0.37 sec | Passed
cache location preferences w/ dependency | 60 ms | Passed
cached post-shuffle | 0.1 sec | Passed
catch errors in event loop | 33 ms | Passed
countApprox on empty RDDs schedules jobs which never complete | 26 ms | Passed
don't submit stage until its dependencies map outputs are registered (SPARK-5259) | 40 ms | Passed
equals and hashCode AccumulableInfo | 1 ms | Passed
extremely late fetch failures don't cause multiple concurrent attempts for the same stage | 40 ms | Passed
failure of stage used by two jobs | 17 ms | Passed
getMissingParentStages should consider all ancestor RDDs' cache statuses | 16 ms | Passed
getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) | 0.44 sec | Passed
getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) | 96 ms | Passed
getShuffleDependencies correctly returns only direct shuffle parents | 2 ms | Passed
ignore late map task completions | 27 ms | Passed
interruptOnCancel should not crash DAGScheduler | 67 ms | Passed
job cancellation no-kill backend | 30 ms | Passed
late fetch failures don't cause multiple concurrent attempts for the same map stage | 0.11 sec | Passed
map stage submission with executor failure late map task completions | 63 ms | Passed
map stage submission with fetch failure | 71 ms | Passed
map stage submission with multiple shared stages and failures | 0.2 sec | Passed
map stage submission with reduce stage also depending on the data | 28 ms | Passed
misbehaved accumulator should not crash DAGScheduler and SparkContext | 0.29 sec | Passed
misbehaved accumulator should not impact other accumulators | 0.58 sec | Passed
misbehaved resultHandler should not crash DAGScheduler and SparkContext | 0.15 sec | Passed
recursive shuffle failures | 62 ms | Passed
reduce task locality preferences should only include machines with largest map outputs | 23 ms | Passed
reduce tasks should be placed locally with map output | 77 ms | Passed
register map outputs correctly after ExecutorLost and task Resubmitted | 22 ms | Passed
regression test for getCacheLocs | 9 ms | Passed
run shuffle with map stage failure | 10 ms | Passed
run trivial job | 16 ms | Passed
run trivial job w/ dependency | 9 ms | Passed
run trivial shuffle | 42 ms | Passed
run trivial shuffle with fetch failure | 64 ms | Passed
run trivial shuffle with out-of-band executor failure and retry | 36 ms | Passed
shuffle fetch failure in a reused shuffle dependency | 62 ms | Passed
shuffle files lost when executor failure without shuffle service | 0.9 sec | Passed
shuffle files lost when worker lost with shuffle service | 0.84 sec | Passed
shuffle files lost when worker lost without shuffle service | 1 sec | Passed
shuffle files not lost when executor failure with shuffle service | 0.37 sec | Passed
shuffle files not lost when slave lost with shuffle service | 0.96 sec | Passed
simple map stage submission | 22 ms | Passed
stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) | 44 ms | Passed
stage used by two jobs, the first no longer active (SPARK-6880) | 39 ms | Passed
stages with both narrow and shuffle dependencies use narrow ones for locality | 23 ms | Passed
task end event should have updated accumulators (SPARK-20342) | 1.8 sec | Passed
task events always posted in speculation / when stage is killed | 0.16 sec | Passed
trivial job cancellation | 6 ms | Passed
trivial job failure | 13 ms | Passed
trivial shuffle with multiple fetch failures | 16 ms | Passed
unserializable task | 10 ms | Passed
zero split job | 20 ms | Passed
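
To reproduce a run like this locally from a Spark source checkout, the suite can be invoked through sbt. This is the usual form from Spark's developer-tools documentation; the exact module path and flags may differ between branches, so treat it as a sketch rather than the definitive command for this build:

  # run the whole DAGSchedulerSuite in the core module
  build/sbt "core/testOnly *DAGSchedulerSuite"

  # or narrow to tests whose names contain a substring (ScalaTest -z), e.g. the SPARK-23207 cases
  build/sbt "core/testOnly *DAGSchedulerSuite -- -z SPARK-23207"

The suite's fully qualified name is org.apache.spark.scheduler.DAGSchedulerSuite, which can replace the wildcard if a more precise match is preferred.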