Test name | Duration | Result
All shuffle files on the slave should be cleaned up when slave lost | 0.39 sec | Passed
Barrier task failures from a previous stage attempt don't trigger stage retry | 51 ms | Passed |
Barrier task failures from the same stage attempt don't trigger multiple stage retries | 89 ms | Passed |
Completions in zombie tasksets update status of non-zombie taskset | 34 ms | Passed |
Fail the job if a barrier ResultTask failed | 50 ms | Passed |
Failures in different stages should not trigger an overall abort | 0.17 sec | Passed |
Multiple consecutive stage fetch failures should lead to job being aborted | 0.21 sec | Passed |
Non-consecutive stage failures don't trigger abort | 0.41 sec | Passed |
Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure | 0.12 sec | Passed |
Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled | 0.17 sec | Passed |
SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures | 2 sec | Passed
SPARK-23207: cannot rollback a result stage | 35 ms | Passed |
SPARK-23207: local checkpoint fail to rollback (checkpointed before) | 0.25 sec | Passed |
SPARK-23207: local checkpoint fail to rollback (checkpointing now) | 16 ms | Passed |
SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) | 0.21 sec | Passed |
SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) | 31 ms | Passed |
SPARK-25341: abort stage while using old fetch protocol | 0.46 sec | Passed |
SPARK-25341: continuous indeterminate stage roll back | 0.11 sec | Passed |
SPARK-25341: retry all the succeeding stages when the map stage is indeterminate | 0.12 sec | Passed |
SPARK-28967 properties must be cloned before posting to listener bus for 0 partition | 21 ms | Passed |
SPARK-29042: Sampled RDD with unordered input should be indeterminate | 2 ms | Passed |
SPARK-30388: shuffle fetch failed on speculative task, but original task succeed | 0.46 sec | Passed |
Single stage fetch failure should not abort the stage | 0.15 sec | Passed |
Spark exceptions should include call site in stack trace | 31 ms | Passed |
Trigger mapstage's job listener in submitMissingTasks | 0.12 sec | Passed |
[SPARK-13902] Ensure no duplicate stages are created | 0.14 sec | Passed |
[SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts | 40 ms | Passed |
[SPARK-3353] parent stage should have lower stage id | 0.47 sec | Passed |
accumulator not calculated for resubmitted result stage | 11 ms | Passed |
accumulator not calculated for resubmitted task in result stage | 0.28 sec | Passed |
accumulators are updated on exception failures and task killed | 11 ms | Passed |
avoid exponential blowup when getting preferred locs list | 0.37 sec | Passed |
cache location preferences w/ dependency | 60 ms | Passed |
cached post-shuffle | 0.1 sec | Passed |
catch errors in event loop | 33 ms | Passed |
countApprox on empty RDDs schedules jobs which never complete | 26 ms | Passed |
don't submit stage until its dependencies map outputs are registered (SPARK-5259) | 40 ms | Passed |
equals and hashCode AccumulableInfo | 1 ms | Passed |
extremely late fetch failures don't cause multiple concurrent attempts for the same stage | 40 ms | Passed |
failure of stage used by two jobs | 17 ms | Passed |
getMissingParentStages should consider all ancestor RDDs' cache statuses | 16 ms | Passed |
getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) | 0.44 sec | Passed |
getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) | 96 ms | Passed |
getShuffleDependencies correctly returns only direct shuffle parents | 2 ms | Passed |
ignore late map task completions | 27 ms | Passed |
interruptOnCancel should not crash DAGScheduler | 67 ms | Passed |
job cancellation no-kill backend | 30 ms | Passed |
late fetch failures don't cause multiple concurrent attempts for the same map stage | 0.11 sec | Passed |
map stage submission with executor failure late map task completions | 63 ms | Passed |
map stage submission with fetch failure | 71 ms | Passed |
map stage submission with multiple shared stages and failures | 0.2 sec | Passed |
map stage submission with reduce stage also depending on the data | 28 ms | Passed |
misbehaved accumulator should not crash DAGScheduler and SparkContext | 0.29 sec | Passed |
misbehaved accumulator should not impact other accumulators | 0.58 sec | Passed |
misbehaved resultHandler should not crash DAGScheduler and SparkContext | 0.15 sec | Passed |
recursive shuffle failures | 62 ms | Passed |
reduce task locality preferences should only include machines with largest map outputs | 23 ms | Passed |
reduce tasks should be placed locally with map output | 77 ms | Passed |
register map outputs correctly after ExecutorLost and task Resubmitted | 22 ms | Passed |
regression test for getCacheLocs | 9 ms | Passed |
run shuffle with map stage failure | 10 ms | Passed |
run trivial job | 16 ms | Passed |
run trivial job w/ dependency | 9 ms | Passed |
run trivial shuffle | 42 ms | Passed |
run trivial shuffle with fetch failure | 64 ms | Passed |
run trivial shuffle with out-of-band executor failure and retry | 36 ms | Passed |
shuffle fetch failure in a reused shuffle dependency | 62 ms | Passed |
shuffle files lost when executor failure without shuffle service | 0.9 sec | Passed |
shuffle files lost when worker lost with shuffle service | 0.84 sec | Passed |
shuffle files lost when worker lost without shuffle service | 1 sec | Passed |
shuffle files not lost when executor failure with shuffle service | 0.37 sec | Passed |
shuffle files not lost when slave lost with shuffle service | 0.96 sec | Passed |
simple map stage submission | 22 ms | Passed |
stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) | 44 ms | Passed |
stage used by two jobs, the first no longer active (SPARK-6880) | 39 ms | Passed |
stages with both narrow and shuffle dependencies use narrow ones for locality | 23 ms | Passed |
task end event should have updated accumulators (SPARK-20342) | 1.8 sec | Passed |
task events always posted in speculation / when stage is killed | 0.16 sec | Passed |
trivial job cancellation | 6 ms | Passed |
trivial job failure | 13 ms | Passed |
trivial shuffle with multiple fetch failures | 16 ms | Passed |
unserializable task | 10 ms | Passed |
zero split job | 20 ms | Passed |
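
The consecutive-failure cases above ("Multiple consecutive stage fetch failures should lead to job being aborted", "Non-consecutive stage failures don't trigger abort") exercise the scheduler's abort threshold: a stage is retried on fetch failure only up to a fixed number of consecutive failed attempts, and a success in between resets the count. As a rough illustration only, not Spark's actual test code, a minimal ScalaTest sketch of that contract might look like the following; `StageFailureTracker` and its methods are hypothetical stand-ins for the scheduler's internal bookkeeping, and the limit of 4 mirrors the default of `spark.stage.maxConsecutiveAttempts`.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical stand-in for the scheduler's consecutive-failure bookkeeping.
// The real logic lives inside Spark's DAGScheduler/Stage; this only models
// the retry-vs-abort contract the tests above verify.
class StageFailureTracker(maxConsecutiveFailures: Int = 4) {
  private var consecutiveFailures = 0

  // Records one fetch failure; returns true if the job should now be aborted.
  def recordFetchFailure(): Boolean = {
    consecutiveFailures += 1
    consecutiveFailures >= maxConsecutiveFailures
  }

  // A successful stage attempt resets the consecutive-failure count.
  def recordSuccess(): Unit = consecutiveFailures = 0
}

class StageFailureTrackerSuite extends AnyFunSuite {

  test("multiple consecutive stage fetch failures lead to job being aborted") {
    val tracker = new StageFailureTracker(maxConsecutiveFailures = 4)
    // The first three failures should only trigger retries, not an abort.
    assert(!tracker.recordFetchFailure())
    assert(!tracker.recordFetchFailure())
    assert(!tracker.recordFetchFailure())
    // The fourth consecutive failure crosses the limit and aborts the job.
    assert(tracker.recordFetchFailure())
  }

  test("non-consecutive stage failures don't trigger abort") {
    val tracker = new StageFailureTracker(maxConsecutiveFailures = 4)
    assert(!tracker.recordFetchFailure())
    assert(!tracker.recordFetchFailure())
    tracker.recordSuccess() // an intervening success resets the counter
    assert(!tracker.recordFetchFailure())
    assert(!tracker.recordFetchFailure())
  }
}
```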