Console Output

Skipping 1,350 KB..
ker lost with shuffle service
- shuffle files lost when worker lost without shuffle service
- shuffle files not lost when executor failure with shuffle service
- shuffle files lost when executor failure without shuffle service
- Single stage fetch failure should not abort the stage.
- Multiple consecutive stage fetch failures should lead to job being aborted.
- Failures in different stages should not trigger an overall abort
- Non-consecutive stage failures don't trigger abort
- trivial shuffle with multiple fetch failures
- Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure
- Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled *** FAILED ***
  ArrayBuffer(0) did not equal List(0) (DAGSchedulerSuite.scala:1132)
- Fail the job if a barrier ResultTask failed
- late fetch failures don't cause multiple concurrent attempts for the same map stage
- extremely late fetch failures don't cause multiple concurrent attempts for the same stage
- task events always posted in speculation / when stage is killed
- ignore late map task completions
- run shuffle with map stage failure
- shuffle fetch failure in a reused shuffle dependency
- don't submit stage until its dependencies map outputs are registered (SPARK-5259)
- register map outputs correctly after ExecutorLost and task Resubmitted
- failure of stage used by two jobs
- stage used by two jobs, the first no longer active (SPARK-6880)
- stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880)
- run trivial shuffle with out-of-band executor failure and retry
- recursive shuffle failures
- cached post-shuffle
- misbehaved accumulator should not crash DAGScheduler and SparkContext
- misbehaved accumulator should not impact other accumulators
- misbehaved resultHandler should not crash DAGScheduler and SparkContext
- invalid spark.job.interruptOnCancel should not crash DAGScheduler
- getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606)
- getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606)
- accumulator not calculated for resubmitted result stage
- accumulator not calculated for resubmitted task in result stage
- accumulators are updated on exception failures and task killed
- reduce tasks should be placed locally with map output
- reduce task locality preferences should only include machines with largest map outputs
- stages with both narrow and shuffle dependencies use narrow ones for locality
- Spark exceptions should include call site in stack trace
- catch errors in event loop
- simple map stage submission
- map stage submission with reduce stage also depending on the data
- map stage submission with fetch failure
- map stage submission with multiple shared stages and failures
- Trigger mapstage's job listener in submitMissingTasks
- map stage submission with executor failure late map task completions
- getShuffleDependencies correctly returns only direct shuffle parents
- SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures
- [SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts
- task end event should have updated accumulators (SPARK-20342)
- Barrier task failures from the same stage attempt don't trigger multiple stage retries
- Barrier task failures from a previous stage attempt don't trigger stage retry
- SPARK-23207: retry all the succeeding stages when the map stage is indeterminate
- SPARK-23207: cannot rollback a result stage
- SPARK-23207: local checkpoint fail to rollback (checkpointed before)
- SPARK-23207: local checkpoint fail to rollback (checkpointing now)
- SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before)
- SPARK-23207: reliable checkpoint fail to rollback (checkpointing now)
- SPARK-27164: RDD.countApprox on empty RDDs schedules jobs which never complete
FsHistoryProviderSuite:
- Parse application logs (inMemory = true)
- Parse application logs (inMemory = false)
- SPARK-3697: ignore files that cannot be read.
- history file is renamed from inprogress to completed
- Parse logs that application is not started
- SPARK-5582: empty log directory
- apps with multiple attempts with order
- log urls without customization
- custom log urls, including FILE_NAME
- custom log urls, excluding FILE_NAME
- custom log urls with invalid attribute
- custom log urls, LOG_FILES not available while FILE_NAME is specified
- custom log urls, app not finished, applyIncompleteApplication: true
- custom log urls, app not finished, applyIncompleteApplication: false
- log cleaner
- should not clean inprogress application with lastUpdated time less than maxTime
- log cleaner for inProgress files
- Event log copy
- driver log cleaner
- SPARK-8372: new logs with no app ID are ignored
- provider correctly checks whether fs is in safe mode
- provider waits for safe mode to finish before initializing
- provider reports error after FS leaves safe mode
- ignore hidden files
- support history server ui admin acls
- mismatched version discards old listing
- invalidate cached UI
- clean up stale app information
- SPARK-21571: clean up removes invalid history files
- always find end event for finished apps
- parse event logs with optimizations off
- SPARK-24948: blacklist files we don't have read permission on
- check in-progress event logs absolute length
RDDSuite:
- basic operations
- serialization
- distinct with known partitioner preserves partitioning
- countApproxDistinct
- SparkContext.union
- SparkContext.union parallel partition listing
- SparkContext.union creates UnionRDD if at least one RDD has no partitioner
- SparkContext.union creates PartitionAwareUnionRDD if all RDDs have partitioners
- PartitionAwareUnionRDD raises exception if at least one RDD has no partitioner
- SPARK-23778: empty RDD in union should not produce a UnionRDD
- partitioner aware union
- UnionRDD partition serialized size should be small
- fold
- fold with op modifying first arg
- aggregate
- treeAggregate
- treeAggregate with ops modifying first args
- treeReduce
- basic caching
- caching with failures
- empty RDD
- repartitioned RDDs
- repartitioned RDDs perform load balancing
- coalesced RDDs
- coalesced RDDs with locality
- coalesced RDDs with partial locality
- coalesced RDDs with locality, large scale (10K partitions)
- coalesced RDDs with partial locality, large scale (10K partitions)
- coalesced RDDs with locality, fail first pass
- zipped RDDs
- partition pruning
- collect large number of empty partitions
- take
- top with predefined ordering
- top with custom ordering
- takeOrdered with predefined ordering
- takeOrdered with limit 0
- takeOrdered with custom ordering
- isEmpty
- sample preserves partitioner
- takeSample
- takeSample from an empty rdd
- randomSplit
- runJob on an invalid partition
- sort an empty RDD
- sortByKey
- sortByKey ascending parameter
- sortByKey with explicit ordering
- repartitionAndSortWithinPartitions
- cartesian on empty RDD
- cartesian on non-empty RDDs
- intersection
- intersection strips duplicates in an input
- zipWithIndex
- zipWithIndex with a single partition
- zipWithIndex chained with other RDDs (SPARK-4433)
- zipWithUniqueId
- retag with implicit ClassTag
- parent method
- getNarrowAncestors
- getNarrowAncestors with multiple parents
- getNarrowAncestors with cycles
- task serialization exception should not hang scheduler
- RDD.partitions() fails fast when partitions indicies are incorrect (SPARK-13021)
- nested RDDs are not supported (SPARK-5063)
- actions cannot be performed inside of transformations (SPARK-5063)
- custom RDD coalescer
- SPARK-18406: race between end-of-task and completion iterator read lock release
- SPARK-23496: order of input partitions can result in severe skew in coalesce
- cannot run actions after SparkContext has been stopped (SPARK-5063)
- cannot call methods on a stopped SparkContext (SPARK-5063)
ExecutorSuite:
- SPARK-15963: Catch `TaskKilledException` correctly in Executor.TaskRunner
- SPARK-19276: Handle FetchFailedExceptions that are hidden by user exceptions
- Executor's worker threads should be UninterruptibleThread
- SPARK-19276: OOMs correctly handled with a FetchFailure
- SPARK-23816: interrupts are not masked by a FetchFailure
- Gracefully handle error in task deserialization
- Heartbeat should drop zero accumulator updates
- Heartbeat should not drop zero accumulator updates when the conf is disabled
SerDeUtilSuite:
- Converting an empty pair RDD to python does not throw an exception (SPARK-5441)
- Converting an empty python RDD to pair RDD does not throw an exception (SPARK-5441)
UtilsSuite:
- timeConversion
- Test byteString conversion
- bytesToString
- copyStream
- copyStreamUpTo
- memoryStringToMb
- splitCommandString
- string formatting of time durations
- reading offset bytes of a file
- reading offset bytes of a file (compressed)
- reading offset bytes across multiple files
- reading offset bytes across multiple files (compressed)
- deserialize long value
- writeByteBuffer should not change ByteBuffer position
- get iterator size
- getIteratorZipWithIndex
- doesDirectoryContainFilesNewerThan
- resolveURI
- resolveURIs with multiple paths
- nonLocalPaths
- isBindCollision
- log4j log level change
- deleteRecursively
- loading properties from file
- timeIt with prepare
- fetch hcfs dir
- shutdown hook manager
- isInDirectory
- circular buffer: if nothing was written to the buffer, display nothing
- circular buffer: if the buffer isn't full, print only the contents written
- circular buffer: data written == size of the buffer
- circular buffer: multiple overflow
- nanSafeCompareDoubles
- nanSafeCompareFloats
- isDynamicAllocationEnabled
- getDynamicAllocationInitialExecutors
- Set Spark CallerContext
- encodeFileNameToURIRawPath
- decodeFileNameInURI
- Kill process
- chi square test of randomizeInPlace
- redact sensitive information
- redact sensitive information in command line args
- tryWithSafeFinally
- tryWithSafeFinallyAndFailureCallbacks
- load extensions
- check Kubernetes master URL
- stringHalfWidth
- trimExceptCRLF standalone
PagedDataSourceSuite:
- basic
SortingSuite:
- sortByKey
- large array
- large array with one split
- large array with many partitions
- sort descending
- sort descending with one split
- sort descending with many partitions
- more partitions than elements
- empty RDD
- partition balancing
- partition balancing for descending sort
- get a range of elements in a sorted RDD that is on one partition
- get a range of elements over multiple partitions in a descendingly sorted RDD
- get a range of elements in an array not partitioned by a range partitioner
- get a range of elements over multiple partitions but not taking up full partitions
RpcAddressSuite:
- hostPort
- fromSparkURL
- fromSparkURL: a typo url
- fromSparkURL: invalid scheme
- toSparkURL
JavaSerializerSuite:
- JavaSerializer instances are serializable
- Deserialize object containing a primitive Class as attribute
LocalDirsSuite:
- Utils.getLocalDir() returns a valid directory, even if some local dirs are missing
- SPARK_LOCAL_DIRS override also affects driver
- Utils.getLocalDir() throws an exception if any temporary directory cannot be retrieved
TaskContextSuite:
- provide metrics sources
- calls TaskCompletionListener after failure
- calls TaskFailureListeners after failure
- all TaskCompletionListeners should be called even if some fail
- all TaskFailureListeners should be called even if some fail
- TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014)
- TaskContext.stageAttemptNumber getter
- accumulators are updated on exception failures
- failed tasks collect only accumulators whose values count during failures
- only updated internal accumulators will be sent back to driver
- localProperties are propagated to executors correctly
- immediately call a completion listener if the context is completed
- immediately call a failure listener if the context has failed
- TaskCompletionListenerException.getMessage should include previousError
- all TaskCompletionListeners should be called even if some fail or a task
HistoryServerSuite:
- application list json
- completed app list json
- running app list json
- minDate app list json
- maxDate app list json
- maxDate2 app list json
- minEndDate app list json
- maxEndDate app list json
- minEndDate and maxEndDate app list json
- minDate and maxEndDate app list json
- limit app list json
- one app json
- one app multi-attempt json
- job list json
- job list from multi-attempt app json(1)
- job list from multi-attempt app json(2)
- one job json
- succeeded job list json
- succeeded&failed job list json
- executor list json
- executor list with executor metrics json
- executor list with executor process tree metrics json
- executor list with executor garbage collection metrics json
- stage list json
- complete stage list json
- failed stage list json
- one stage json
- one stage attempt json
- stage task summary w shuffle write
- stage task summary w shuffle read
- stage task summary w/ custom quantiles
- stage task list
- stage task list w/ offset & length
- stage task list w/ sortBy
- stage task list w/ sortBy short names: -runtime
- stage task list w/ sortBy short names: runtime
- stage list with accumulable json
- stage with accumulable json
- stage task list from multi-attempt app json(1)
- stage task list from multi-attempt app json(2)
- blacklisting for stage
- blacklisting node for stage
- rdd list storage json
- executor node blacklisting
- executor node blacklisting unblacklisting
- executor memory usage
- app environment
- download all logs for app with multiple attempts
- download one log for app with multiple attempts
- response codes on bad paths
- automatically retrieve uiRoot from request through Knox
- static relative links are prefixed with uiRoot (spark.ui.proxyBase)
- /version api endpoint
- ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase)
- security manager starts with spark.authenticate set
- incomplete apps get refreshed
- ui and api authorization checks
NextIteratorSuite:
- one iteration
- two iterations
- empty iteration
- close is called once for empty iterations
- close is called once for non-empty iterations
ParallelCollectionSplitSuite:
- one element per slice
- one slice
- equal slices
- non-equal slices
- splitting exclusive range
- splitting inclusive range
- empty data
- zero slices
- negative number of slices
- exclusive ranges sliced into ranges
- inclusive ranges sliced into ranges
- identical slice sizes between Range and NumericRange
- identical slice sizes between List and NumericRange
- large ranges don't overflow
- random array tests
- random exclusive range tests
- random inclusive range tests
- exclusive ranges of longs
- inclusive ranges of longs
- exclusive ranges of doubles
- inclusive ranges of doubles
- inclusive ranges with Int.MaxValue and Int.MinValue
- empty ranges with Int.MaxValue and Int.MinValue
UISeleniumSuite:
- effects of unpersist() / persist() should be reflected
- failed stages should not appear to be active
- spark.ui.killEnabled should properly control kill button display
- jobs page should not display job group name unless some job was submitted in a job group
- job progress bars should handle stage / task failures
- job details page should display useful information for stages that haven't started
- job progress bars / cells reflect skipped stages / tasks
- stages that aren't run appear as 'skipped stages' after a job finishes
- jobs with stages that are skipped should show correct link descriptions on all jobs page
- attaching and detaching a new tab
- kill stage POST/GET response is correct
- kill job POST/GET response is correct
- stage & job retention
- live UI json application list
- job stages should have expected dotfile under DAG visualization
- stages page should show skipped stages
- Staleness of Spark UI should not last minutes or hours
HadoopDelegationTokenManagerSuite:
- default configuration
- disable hadoopfs credential provider
- using deprecated configurations
RandomBlockReplicationPolicyBehavior:
- block replication - random block replication policy
ExecutorRunnerTest:
- command includes appId
BlockTransferServiceSuite:
- fetchBlockSync should not hang when BlockFetchingListener.onBlockFetchSuccess fails
EventLoggingListenerSuite:
- Verify log file exist
- Basic event logging
- Basic event logging with compression
- End-to-end event logging
- End-to-end event logging with compression
- Event logging with password redaction
- Log overwriting
- Event log name
- Executor metrics update
DriverRunnerTest:
- Process succeeds instantly
- Process failing several times and then succeeding
- Process doesn't restart if not supervised
- Process doesn't restart if killed
- Reset of backoff counter
- Kill process finalized with state KILLED
- Finalized with state FINISHED
- Finalized with state FAILED
- Handle exception starting process
PrefixComparatorsSuite:
- String prefix comparator
- Binary prefix comparator
- double prefix comparator handles NaNs properly
- double prefix comparator handles negative NaNs properly
- double prefix comparator handles other special values properly
NettyBlockTransferSecuritySuite:
- security default off
- security on same password
- security on mismatch password
- security mismatch auth off on server
- security mismatch auth off on client
- security with aes encryption
CommandUtilsSuite:
- set libraryPath correctly
- auth secret shouldn't appear in java opts
PairRDDFunctionsSuite:
- aggregateByKey
- groupByKey
- groupByKey with duplicates
- groupByKey with negative key hash codes
- groupByKey with many output partitions
- sampleByKey
- sampleByKeyExact
- reduceByKey
- reduceByKey with collectAsMap
- reduceByKey with many output partitions
- reduceByKey with partitioner
- countApproxDistinctByKey
- join
- join all-to-all
- leftOuterJoin
- cogroup with empty RDD
- cogroup with groupByed RDD having 0 partitions
- cogroup between multiple RDD with an order of magnitude difference in number of partitions
- cogroup between multiple RDD with number of partitions similar in order of magnitude
- cogroup between multiple RDD when defaultParallelism is set without proper partitioner
- cogroup between multiple RDD when defaultParallelism is set with proper partitioner
- cogroup between multiple RDD when defaultParallelism is set; with huge number of partitions in upstream RDDs
- rightOuterJoin
- fullOuterJoin
- join with no matches
- join with many output partitions
- groupWith
- groupWith3
- groupWith4
- zero-partition RDD
- keys and values
- default partitioner uses partition size
- default partitioner uses largest partitioner
- subtract
- subtract with narrow dependency
- subtractByKey
- subtractByKey with narrow dependency
- foldByKey
- foldByKey with mutable result type
- saveNewAPIHadoopFile should call setConf if format is configurable
- The JobId on the driver and executors should be the same during the commit
- saveAsHadoopFile should respect configured output committers
- failure callbacks should be called before calling writer.close() in saveNewAPIHadoopFile
- failure callbacks should be called before calling writer.close() in saveAsHadoopFile
- saveAsNewAPIHadoopDataset should support invalid output paths when there are no files to be committed to an absolute output location
- saveAsHadoopDataset should respect empty output directory when there are no files to be committed to an absolute output location
- lookup
- lookup with partitioner
- lookup with bad partitioner
RBackendSuite:
- close() clears jvmObjectTracker
PrimitiveVectorSuite:
- primitive value
- non-primitive value
- ideal growth
- ideal size
- resizing
MetricsConfigSuite:
- MetricsConfig with default properties
- MetricsConfig with properties set from a file
- MetricsConfig with properties set from a Spark configuration
- MetricsConfig with properties set from a file and a Spark configuration
- MetricsConfig with subProperties
PartiallySerializedBlockSuite:
- valuesIterator() and finishWritingToStream() cannot be called after discard() is called
- discard() can be called more than once
- cannot call valuesIterator() more than once
- cannot call finishWritingToStream() more than once
- cannot call finishWritingToStream() after valuesIterator()
- cannot call valuesIterator() after finishWritingToStream()
- buffers are deallocated in a TaskCompletionListener
- basic numbers with discard() and numBuffered = 50
- basic numbers with finishWritingToStream() and numBuffered = 50
- basic numbers with valuesIterator() and numBuffered = 50
- basic numbers with discard() and numBuffered = 0
- basic numbers with finishWritingToStream() and numBuffered = 0
- basic numbers with valuesIterator() and numBuffered = 0
- basic numbers with discard() and numBuffered = 1000
- basic numbers with finishWritingToStream() and numBuffered = 1000
- basic numbers with valuesIterator() and numBuffered = 1000
- case classes with discard() and numBuffered = 50
- case classes with finishWritingToStream() and numBuffered = 50
- case classes with valuesIterator() and numBuffered = 50
- case classes with discard() and numBuffered = 0
- case classes with finishWritingToStream() and numBuffered = 0
- case classes with valuesIterator() and numBuffered = 0
- case classes with discard() and numBuffered = 1000
- case classes with finishWritingToStream() and numBuffered = 1000
- case classes with valuesIterator() and numBuffered = 1000
- empty iterator with discard() and numBuffered = 0
- empty iterator with finishWritingToStream() and numBuffered = 0
- empty iterator with valuesIterator() and numBuffered = 0
SparkContextSchedulerCreationSuite:
- bad-master
- local
- local-*
- local-n
- local-*-n-failures
- local-n-failures
- bad-local-n
- bad-local-n-failures
- local-default-parallelism
- local-cluster
SerializationDebuggerSuite:
- primitives, strings, and nulls
- primitive arrays
- non-primitive arrays
- serializable object
- nested arrays
- nested objects
- cycles (should not loop forever)
- root object not serializable
- array containing not serializable element
- object containing not serializable field
- externalizable class writing out not serializable object
- externalizable class writing out serializable objects
- object containing writeReplace() which returns not serializable object
- object containing writeReplace() which returns serializable object
- no infinite loop with writeReplace() which returns class of its own type
- object containing writeObject() and not serializable field
- object containing writeObject() and serializable field
- object of serializable subclass with more fields than superclass (SPARK-7180)
- crazy nested objects
- improveException
- improveException with error in debugger
LoggingSuite:
- spark-shell logging filter
NettyRpcHandlerSuite:
- receive
- connectionTerminated
SamplingUtilsSuite:
- reservoirSampleAndCount
- SPARK-18678 reservoirSampleAndCount with tiny input
- computeFraction
TimeStampedHashMapSuite:
- HashMap - basic test
- TimeStampedHashMap - basic test
- TimeStampedHashMap - threading safety test
- TimeStampedHashMap - clearing by timestamp
RandomSamplerSuite:
- utilities
- sanity check medianKSD against references
- bernoulli sampling
- bernoulli sampling without iterator
- bernoulli sampling with gap sampling optimization
- bernoulli sampling (without iterator) with gap sampling optimization
- bernoulli boundary cases
- bernoulli (without iterator) boundary cases
- bernoulli data types
- bernoulli clone
- bernoulli set seed
- replacement sampling
- replacement sampling without iterator
- replacement sampling with gap sampling
- replacement sampling (without iterator) with gap sampling
- replacement boundary cases
- replacement (without) boundary cases
- replacement data types
- replacement clone
- replacement set seed
- bernoulli partitioning sampling
- bernoulli partitioning sampling without iterator
- bernoulli partitioning boundary cases
- bernoulli partitioning (without iterator) boundary cases
- bernoulli partitioning data
- bernoulli partitioning clone
ChunkedByteBufferOutputStreamSuite:
- empty output
- write a single byte
- write a single near boundary
- write a single at boundary
- single chunk output
- single chunk output at boundary size
- multiple chunk output
- multiple chunk output at boundary size
ProcfsMetricsGetterSuite:
- testGetProcessInfo
SparkSubmitUtilsSuite:
- incorrect maven coordinate throws error
- create repo resolvers
- create additional resolvers
:: loading settings :: url = jar:file:/home/jenkins/.m2/repository/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
- add dependencies works correctly
- excludes works correctly
- ivy path works correctly
- search for artifact at local repositories
- dependency not found throws RuntimeException
- neglects Spark and Spark's dependencies
- exclude dependencies end to end
:: loading settings :: file = /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/core/target/tmp/ivy-84a7e274-d870-49e1-a4d4-5b1d40058e05/ivysettings.xml
- load ivy settings file
- SPARK-10878: test resolution files cleaned after resolving artifact
ImplicitOrderingSuite:
- basic inference of Orderings
TaskMetricsSuite:
- mutating values
- mutating shuffle read metrics values
- mutating shuffle write metrics values
- mutating input metrics values
- mutating output metrics values
- merging multiple shuffle read metrics
- additional accumulables
ExternalShuffleServiceSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- using external shuffle service
ClosureCleanerSuite:
- closures inside an object
- closures inside a class
- closures inside a class with no default constructor
- closures that don't use fields of the outer class
- nested closures inside an object
- nested closures inside a class
- toplevel return statements in closures are identified at cleaning time
- return statements from named functions nested in closures don't raise exceptions
- user provided closures are actually cleaned
- createNullValue
UnpersistSuite:
- unpersist RDD
PeriodicRDDCheckpointerSuite:
- Persisting
- Checkpointing
TaskSetManagerSuite:
- TaskSet with no preferences
- multiple offers with no preferences
- skip unsatisfiable locality levels
- basic delay scheduling
- we do not need to delay scheduling when we only have noPref tasks in the queue
- delay scheduling with fallback
- delay scheduling with failed hosts
- task result lost
- repeated failures lead to task set abortion
- executors should be blacklisted after task failure, in spite of locality preferences
- new executors get added and lost
- Executors exit for reason unrelated to currently running tasks
- test RACK_LOCAL tasks
- do not emit warning when serialized task is small
- emit warning when serialized task is large
- Not serializable exception thrown if the task cannot be serialized
- abort the job if total size of results is too large
- [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie
- [SPARK-22074] Task killed by other attempt task should not be resubmitted
- speculative and noPref task should be scheduled after node-local
- node-local tasks should be scheduled right away when there are only node-local and no-preference tasks
- SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished
- SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished
- Ensure TaskSetManager is usable after addition of levels
- Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL.
- Test TaskLocation for different host type.
- Kill other task attempts when one attempt belonging to the same task succeeds
- Killing speculative tasks does not count towards aborting the taskset
- SPARK-19868: DagScheduler only notified of taskEnd when state is ready
- SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names
- don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks
- update application blacklist for shuffle-fetch
- update blacklist before adding pending task to avoid race condition
- SPARK-21563 context's added jars shouldn't change mid-TaskSet
- [SPARK-24677] Avoid NoSuchElementException from MedianHeap
- SPARK-24755 Executor loss can cause task to not be resubmitted
- SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success
- SPARK-13704 Rack Resolution is done with a batch of de-duped hosts
BlockManagerBasicStrategyReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
RDDOperationGraphSuite:
- Test simple cluster equals
ShuffleExternalSorterSuite:
- nested spill should be no-op
ChunkedByteBufferSuite:
- no chunks
- getChunks() duplicates chunks
- copy() does not affect original buffer's position
- writeFully() does not affect original buffer's position
- SPARK-24107: writeFully() write buffer which is larger than bufferWriteChunkSize
- toArray()
- toArray() throws UnsupportedOperationException if size exceeds 2GB
- toInputStream()
HistoryServerDiskManagerSuite:
- leasing space
- tracking active stores
- approximate size heuristic
PythonBroadcastSuite:
- PythonBroadcast can be serialized with Kryo (SPARK-4882)
NettyBlockTransferServiceSuite:
- can bind to a random port
- can bind to two random ports
- can bind to a specific port
- can bind to a specific port twice and the second increments
BasicSchedulerIntegrationSuite:
- super simple job
- multi-stage job
- job with fetch failure
- job failure after 4 attempts
JobWaiterSuite:
- call jobFailed multiple times
RDDBarrierSuite:
- create an RDDBarrier
- create an RDDBarrier in the middle of a chain of RDDs
- RDDBarrier with shuffle
UninterruptibleThreadSuite:
- interrupt when runUninterruptibly is running
- interrupt before runUninterruptibly runs
- nested runUninterruptibly
- stress test
DriverSuite:
- driver should exit after finishing without cleanup (SPARK-530) !!! IGNORED !!!
CompactBufferSuite:
- empty buffer
- basic inserts
- adding sequences
- adding the same buffer to itself
MapStatusSuite:
- compressSize
- decompressSize
- MapStatus should never report non-empty blocks' sizes as 0
- large tasks should use org.apache.spark.scheduler.HighlyCompressedMapStatus
- HighlyCompressedMapStatus: estimated size should be the average non-empty block size
- SPARK-22540: ensure HighlyCompressedMapStatus calculates correct avgSize
- RoaringBitmap: runOptimize succeeded
- RoaringBitmap: runOptimize failed
- Blocks which are bigger than SHUFFLE_ACCURATE_BLOCK_THRESHOLD should not be underestimated.
- SPARK-21133 HighlyCompressedMapStatus#writeExternal throws NPE
BlockInfoManagerSuite:
- initial memory usage
- get non-existent block
- basic lockNewBlockForWriting
- lockNewBlockForWriting blocks while write lock is held, then returns false after release
- lockNewBlockForWriting blocks while write lock is held, then returns true after removal
- read locks are reentrant
- multiple tasks can hold read locks
- single task can hold write lock
- cannot grab a writer lock while already holding a write lock
- assertBlockIsLockedForWriting throws exception if block is not locked
- downgrade lock
- write lock will block readers
- read locks will block writer
- removing a non-existent block throws IllegalArgumentException
- removing a block without holding any locks throws IllegalStateException
- removing a block while holding only a read lock throws IllegalStateException
- removing a block causes blocked callers to receive None
- releaseAllLocksForTask releases write locks
StoragePageSuite:
- rddTable
- empty rddTable
- streamBlockStorageLevelDescriptionAndSize
- receiverBlockTables
- empty receiverBlockTables
TaskSchedulerImplSuite:
- Scheduler does not always schedule tasks on the same workers
- Scheduler correctly accounts for multiple CPUs per task
- Scheduler does not crash when tasks are not serializable
- concurrent attempts for the same stage only have one active taskset
- don't schedule more tasks after a taskset is zombie
- if a zombie attempt finishes, continue scheduling tasks for non-zombie attempts
- tasks are not re-scheduled while executor loss reason is pending
- scheduled tasks obey task and stage blacklists
- scheduled tasks obey node and executor blacklists
- abort stage when all executors are blacklisted and we cannot acquire new executor
- SPARK-22148 abort timer should kick in when task is completely blacklisted & no new executor can be acquired
- SPARK-22148 try to acquire a new executor when task is unschedulable with 1 executor
- SPARK-22148 abort timer should clear unschedulableTaskSetToExpiryTime for all TaskSets
- SPARK-22148 Ensure we don't abort the taskSet if we haven't been completely blacklisted
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 9
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 9
- abort stage if executor loss results in unschedulability from previously failed tasks
- don't abort if there is an executor available, though it hasn't had scheduled tasks yet
- SPARK-16106 locality levels updated if executor added to existing host
- scheduler checks for executors that can be expired from blacklist
- if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553)
- if a task finishes with TaskState.LOST its executor is marked as dead
- Locality should be used for bulk offers even with delay scheduling off
- With delay scheduling off, tasks can be run at any locality level immediately
- TaskScheduler should throw IllegalArgumentException when schedulingMode is not supported
- Completions in zombie tasksets update status of non-zombie taskset
- don't schedule for a barrier taskSet if available slots are less than pending tasks
- schedule tasks for a barrier taskSet if all tasks can be launched together
- cancelTasks shall kill all the running tasks and fail the stage
- killAllTaskAttempts shall kill all the running tasks and not fail the stage
- mark taskset for a barrier stage as zombie in case a task fails
SparkConfSuite:
- Test byteString conversion
- Test timeString conversion
- loading from system properties
- initializing without loading defaults
- named set methods
- basic get and set
- creating SparkContext without master and app name
- creating SparkContext without master
- creating SparkContext without app name
- creating SparkContext with both master and app name
- SparkContext property overriding
- nested property names
- Thread safeness - SPARK-5425
- register kryo classes through registerKryoClasses
- register kryo classes through registerKryoClasses and custom registrator
- register kryo classes through conf
- deprecated configs
- akka deprecated configs
- SPARK-13727
- SPARK-17240: SparkConf should be serializable (java)
- SPARK-17240: SparkConf should be serializable (kryo)
- encryption requires authentication
- spark.network.timeout should bigger than spark.executor.heartbeatInterval
- SPARK-26998: SSL configuration not needed on executors
- SPARK-27244 toDebugString redacts sensitive information
- SPARK-24337: getSizeAsKb with default throws an useful error message with key name
- SPARK-24337: getTimeAsMs throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default long throws an useful error message with key name
- SPARK-24337: getSizeAsMb throws an useful error message with key name
- SPARK-24337: getSizeAsGb throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default string throws an useful error message with key name
- SPARK-24337: getDouble throws an useful error message with key name
- SPARK-24337: getTimeAsMs with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes throws an useful error message with key name
- SPARK-24337: getSizeAsGb with default throws an useful error message with key name
- SPARK-24337: getInt throws an useful error message with key name
- SPARK-24337: getSizeAsMb with default throws an useful error message with key name
- SPARK-24337: getSizeAsKb throws an useful error message with key name
- SPARK-24337: getBoolean throws an useful error message with key name
- SPARK-24337: getLong throws an useful error message with key name
ShuffleBlockFetcherIteratorSuite:
- successful 3 local reads + 2 remote reads
- release current unexhausted buffer in case the task completes early
- iterator is all consumed if task completes early
- fail all blocks if any of the remote request fails
- retry corrupt blocks
- big blocks are also checked for corruption
- ensure big blocks available as a concatenated stream can be read
- retry corrupt blocks (disabled)
- Blocks should be shuffled to disk when size of the request is above the threshold(maxReqSizeShuffleToMem).
- fail zero-size blocks
ConfigEntrySuite:
- conf entry: int
- conf entry: long
- conf entry: double
- conf entry: boolean
- conf entry: optional
- conf entry: fallback
- conf entry: time
- conf entry: bytes
- conf entry: regex
- conf entry: string seq
- conf entry: int seq
- conf entry: transformation
- conf entry: checkValue()
- conf entry: valid values check
- conf entry: conversion error
- default value handling is null-safe
- variable expansion of spark config entries
- conf entry : default function
- conf entry: alternative keys
- onCreate
WorkerSuite:
- test isUseLocalNodeSSLConfig
- test maybeUpdateSSLSettings
- test clearing of finishedExecutors (small number of executors)
- test clearing of finishedExecutors (more executors)
- test clearing of finishedDrivers (small number of drivers)
- test clearing of finishedDrivers (more drivers)
- cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=true
- don't cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=false
- WorkDirCleanup cleans app dirs and shuffle metadata when spark.shuffle.service.db.enabled=true
- WorkDirCleanup cleans only app dirs when spark.shuffle.service.db.enabled=false
BlockManagerSuite:
- StorageLevel object caching
- BlockManagerId object caching
- BlockManagerId.isDriver() with DRIVER_IDENTIFIER (SPARK-27090)
- master + 1 manager interaction
- master + 2 managers interaction
- removing block
- removing rdd
- removing broadcast
- reregistration on heart beat
- reregistration on block update
- reregistration doesn't dead lock
- correct BlockResult returned from get() calls
- optimize a location order of blocks without topology information
- optimize a location order of blocks with topology information
- SPARK-9591: getRemoteBytes from another location when Exception throw
- SPARK-14252: getOrElseUpdate should still read from remote storage
- in-memory LRU storage
- in-memory LRU storage with serialization
- in-memory LRU storage with off-heap
- in-memory LRU for partitions of same RDD
- in-memory LRU for partitions of multiple RDDs
- on-disk storage (encryption = off)
- on-disk storage (encryption = on)
- disk and memory storage (encryption = off)
- disk and memory storage (encryption = on)
- disk and memory storage with getLocalBytes (encryption = off)
- disk and memory storage with getLocalBytes (encryption = on)
- disk and memory storage with serialization (encryption = off)
- disk and memory storage with serialization (encryption = on)
- disk and memory storage with serialization and getLocalBytes (encryption = off)
- disk and memory storage with serialization and getLocalBytes (encryption = on)
- disk and off-heap memory storage (encryption = off)
- disk and off-heap memory storage (encryption = on)
- disk and off-heap memory storage with getLocalBytes (encryption = off)
- disk and off-heap memory storage with getLocalBytes (encryption = on)
- LRU with mixed storage levels (encryption = off)
- LRU with mixed storage levels (encryption = on)
- in-memory LRU with streams (encryption = off)
- in-memory LRU with streams (encryption = on)
- LRU with mixed storage levels and streams (encryption = off)
- LRU with mixed storage levels and streams (encryption = on)
- negative byte values in ByteBufferInputStream
- overly large block
- block compression
- block store put failure
- test putBlockDataAsStream with caching (encryption = off)
- test putBlockDataAsStream with caching (encryption = on)
- test putBlockDataAsStream with caching, serialized (encryption = off)
- test putBlockDataAsStream with caching, serialized (encryption = on)
- test putBlockDataAsStream with caching on disk (encryption = off)
- test putBlockDataAsStream with caching on disk (encryption = on)
- turn off updated block statuses
- updated block statuses
- query block statuses
- get matching blocks
- SPARK-1194 regression: fix the same-RDD rule for cache replacement
- safely unroll blocks through putIterator (disk)
- read-locked blocks cannot be evicted from memory
- remove block if a read fails due to missing DiskStore files (SPARK-15736)
- SPARK-13328: refresh block locations (fetch should fail after hitting a threshold)
- SPARK-13328: refresh block locations (fetch should succeed after location refresh)
- SPARK-17484: block status is properly updated following an exception in put()
- SPARK-17484: master block locations are updated following an invalid remote block fetch
- SPARK-20640: Shuffle registration timeout and maxAttempts conf are working
- fetch remote block to local disk if block size is larger than threshold
- query locations of blockIds
PythonRunnerSuite:
- format path
- format paths
CryptoStreamUtilsSuite:
- crypto configuration conversion
- shuffle encryption key length should be 128 by default
- create 256-bit key
- create key with invalid length
- serializer manager integration
- encryption key propagation to executors
- crypto stream wrappers
- error handling wrapper
StatsdSinkSuite:
- metrics StatsD sink with Counter
- metrics StatsD sink with Gauge
- metrics StatsD sink with Histogram
- metrics StatsD sink with Timer
FileCommitProtocolInstantiationSuite:
- Dynamic partitions require appropriate constructor
- Standard partitions work with classic constructor
- Three arg constructors have priority
- Three arg constructors have priority when dynamic
- The protocol must be of the correct class
- If there is no matching constructor, class hierarchy is irrelevant
CompletionIteratorSuite:
- basic test
- reference to sub iterator should not be available after completion
LauncherBackendSuite:
- local: launcher handle
- standalone/client: launcher handle
LogPageSuite:
- get logs simple
UnifiedMemoryManagerSuite:
- single task requesting on-heap execution memory
- two tasks requesting full on-heap execution memory
- two tasks cannot grow past 1 / N of on-heap execution memory
- tasks can block to get at least 1 / 2N of on-heap execution memory
- TaskMemoryManager.cleanUpAllAllocatedMemory
- tasks should not be granted a negative amount of execution memory
- off-heap execution allocations cannot exceed limit
- basic execution memory
- basic storage memory
- execution evicts storage
- execution memory requests smaller than free memory should evict storage (SPARK-12165)
- storage does not evict execution
- small heap
- insufficient executor memory
- execution can evict cached blocks when there are multiple active tasks (SPARK-12155)
- SPARK-15260: atomically resize memory pools
- not enough free memory in the storage pool --OFF_HEAP
UnsafeKryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- serialization buffer overflow reporting
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true, usePool = true
- instance reuse with autoReset = true, referenceTracking = true, usePool = false
- instance reuse with autoReset = false, referenceTracking = true, usePool = true
- instance reuse with autoReset = false, referenceTracking = true, usePool = false
- instance reuse with autoReset = true, referenceTracking = false, usePool = true
- instance reuse with autoReset = true, referenceTracking = false, usePool = false
- instance reuse with autoReset = false, referenceTracking = false, usePool = true
- instance reuse with autoReset = false, referenceTracking = false, usePool = false
- SPARK-25839 KryoPool implementation works correctly in multi-threaded environment
- SPARK-27216: test RoaringBitmap ser/dser with Kryo
NettyRpcAddressSuite:
- toString
- toString for client mode
BitSetSuite:
- basic set and get
- 100% full bit set
- nextSetBit
- xor len(bitsetX) < len(bitsetY)
- xor len(bitsetX) > len(bitsetY)
- andNot len(bitsetX) < len(bitsetY)
- andNot len(bitsetX) > len(bitsetY)
- [gs]etUntil
AsyncRDDActionsSuite:
- countAsync
- collectAsync
- foreachAsync
- foreachPartitionAsync
- takeAsync
- async success handling
- async failure handling
- FutureAction result, infinite wait
- FutureAction result, finite wait
- FutureAction result, timeout
- SimpleFutureAction callback must not consume a thread while waiting
- ComplexFutureAction callback must not consume a thread while waiting
StagePageSuite:
- ApiHelper.COLUMN_TO_INDEX should match headers of the task table
BarrierStageOnSubmittedSuite:
- submit a barrier ResultStage that contains PartitionPruningRDD
- submit a barrier ShuffleMapStage that contains PartitionPruningRDD
- submit a barrier stage that doesn't contain PartitionPruningRDD
- submit a barrier stage with partial partitions
- submit a barrier stage with union()
- submit a barrier stage with coalesce()
- submit a barrier stage that contains an RDD that depends on multiple barrier RDDs
- submit a barrier stage with zip()
- submit a barrier ResultStage with dynamic resource allocation enabled
- submit a barrier ShuffleMapStage with dynamic resource allocation enabled
- submit a barrier ResultStage that requires more slots than current total under local mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local mode
- submit a barrier ResultStage that requires more slots than current total under local-cluster mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local-cluster mode
HistoryServerArgumentsSuite:
- No Arguments Parsing
- Properties File Arguments Parsing --properties-file
HttpSecurityFilterSuite:
- filter bad user input
- perform access control
- set security-related headers
MetricsSystemSuite:
- MetricsSystem with default config
- MetricsSystem with sources add
- MetricsSystem with Driver instance
- MetricsSystem with Driver instance and spark.app.id is not set
- MetricsSystem with Driver instance and spark.executor.id is not set
- MetricsSystem with Executor instance
- MetricsSystem with Executor instance and spark.app.id is not set
- MetricsSystem with Executor instance and spark.executor.id is not set
- MetricsSystem with instance which is neither Driver nor Executor
- MetricsSystem with Executor instance, with custom namespace
- MetricsSystem with Executor instance, custom namespace which is not set
- MetricsSystem with Executor instance, custom namespace, spark.executor.id not set
- MetricsSystem with non-driver, non-executor instance with custom namespace
JobCancellationSuite:
- local mode, FIFO scheduler
- local mode, fair scheduler
- cluster mode, FIFO scheduler
- cluster mode, fair scheduler
- do not put partially executed partitions into cache
- job group
- inherited job group (SPARK-6629)
- job group with interruption
- task reaper kills JVM if killed tasks keep running for too long
- task reaper will not kill JVM if spark.task.killTimeout == -1
- two jobs sharing the same stage
- interruptible iterator of shuffle reader
PartitioningSuite:
- HashPartitioner equality
- RangePartitioner equality
- RangePartitioner getPartition
- RangePartitioner for keys that are not Comparable (but with Ordering)
- RangPartitioner.sketch
- RangePartitioner.determineBounds
- RangePartitioner should run only one job if data is roughly balanced
- RangePartitioner should work well on unbalanced data
- RangePartitioner should return a single partition for empty RDDs
- HashPartitioner not equal to RangePartitioner
- partitioner preservation
- partitioning Java arrays should fail
- zero-length partitions should be correctly handled
- Number of elements in RDD is less than number of partitions
- defaultPartitioner
- defaultPartitioner when defaultParallelism is set
SecurityManagerSuite:
- set security with conf
- set security with conf for groups
- set security with api
- set security with api for groups
- set security modify acls
- set security modify acls for groups
- set security admin acls
- set security admin acls for groups
- set security with * in acls
- set security with * in acls for groups
- security for groups default behavior
- missing secret authentication key
- secret authentication key
- use executor-specific secret file configuration.
- secret file must be defined in both driver and executor
- master yarn cannot use file mounted secrets
- master local cannot use file mounted secrets
- master local[*] cannot use file mounted secrets
- master local[1,2] cannot use file mounted secrets
- master mesos://localhost:8080 cannot use file mounted secrets
- secret key generation: master 'yarn'
- secret key generation: master 'local'
- secret key generation: master 'local[*]'
- secret key generation: master 'local[1, 2]'
- secret key generation: master 'k8s://127.0.0.1'
- secret key generation: master 'k8s://127.0.1.1'
- secret key generation: master 'local-cluster[2, 1, 1024]'
- secret key generation: master 'invalid'
UISuite:
- basic ui visibility !!! IGNORED !!!
- visibility at localhost:4040 !!! IGNORED !!!
- jetty selects different port under contention
- jetty with https selects different port under contention
- jetty binds to port 0 correctly
- jetty with https binds to port 0 correctly
- verify webUrl contains the scheme
- verify webUrl contains the port
- verify proxy rewrittenURI
- verify rewriting location header for reverse proxy
- add and remove handlers with custom user filter
- http -> https redirect applies to all URIs
- specify both http and https ports separately
SSLOptionsSuite:
- test resolving property file as spark conf 
- test resolving property with defaults specified 
- test whether defaults can be overridden 
- variable substitution
- get password from Hadoop credential provider
SparkListenerWithClusterSuite:
- SparkListener sends executor added message
InputOutputMetricsSuite:
- input metrics for old hadoop with coalesce
- input metrics with cache and coalesce
- input metrics for new Hadoop API with coalesce
- input metrics when reading text file
- input metrics on records read - simple
- input metrics on records read - more stages
- input metrics on records - New Hadoop API
- input metrics on records read with cache
- input read/write and shuffle read/write metrics all line up
- input metrics with interleaved reads
- output metrics on records written
- output metrics on records written - new Hadoop API
- output metrics when writing text file
- input metrics with old CombineFileInputFormat
- input metrics with new CombineFileInputFormat
- input metrics with old Hadoop API in different thread
- input metrics with new Hadoop API in different thread
OutputCommitCoordinatorIntegrationSuite:
- exception thrown in OutputCommitter.commitTask()
StandaloneRestSubmitSuite:
- construct submit request
- create submission
- create submission with multiple masters
- create submission from main method
- kill submission
- request submission status
- create then kill
- create then request status
- create then kill then request status
- kill or request status before create
- good request paths
- good request paths, bad requests
- bad request paths
- server returns unknown fields
- client handles faulty server
- client does not send 'SPARK_ENV_LOADED' env var by default
- client does not send 'SPARK_HOME' env var by default
- client does not send 'SPARK_CONF_DIR' env var by default
- client includes mesos env vars
DriverLoggerSuite:
- driver logs are persisted locally and synced to dfs
OutputCommitCoordinatorSuite:
- Only one of two duplicate commit tasks should commit
- If commit fails, if task is retried it should not be locked, and will succeed.
- Job should not complete if all commits are denied
- Only authorized committer failures can clear the authorized committer lock (SPARK-6614)
- SPARK-19631: Do not allow failed attempts to be authorized for committing
- SPARK-24589: Differentiate tasks from different stage attempts
- SPARK-24589: Make sure stage state is cleaned up
SortShuffleSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- SortShuffleManager properly cleans up files for shuffles that use the serialized path
- SortShuffleManager properly cleans up files for shuffles that use the deserialized path
SumEvaluatorSuite:
- correct handling of count 1
- correct handling of count 0
- correct handling of NaN
- correct handling of > 1 values
- test count > 1
MapOutputTrackerSuite:
- master start and stop
- master register shuffle and fetch
- master register and unregister shuffle
- master register shuffle and unregister map output and fetch
- remote fetch
- remote fetch below max RPC message size
- min broadcast size exceeds max RPC message size
- getLocationsWithLargestOutputs with multiple outputs in same machine
- remote fetch using broadcast
- equally divide map statistics tasks
- zero-sized blocks should be excluded when getMapSizesByExecutorId
HadoopFSDelegationTokenProviderSuite:
- hadoopFSsToAccess should return defaultFS even if not configured
- SPARK-24149: retrieve all namenodes from HDFS
WholeTextFileInputFormatSuite:
- for small files minimum split size per node and per rack should be less than or equal to maximum split size.
BlockManagerProactiveReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
- proactive block replication - 2 replicas - 1 block manager deletions
- proactive block replication - 3 replicas - 2 block manager deletions
- proactive block replication - 4 replicas - 3 block manager deletions
- proactive block replication - 5 replicas - 4 block manager deletions
SparkListenerSuite:
- don't call sc.stop in listener
- basic creation and shutdown of LiveListenerBus
- bus.stop() waits for the event queue to completely drain
- metrics for dropped listener events
- basic creation of StageInfo
- basic creation of StageInfo with shuffle
- StageInfo with fewer tasks than partitions
- local metrics
- onTaskGettingResult() called when result fetched remotely
- onTaskGettingResult() not called when result sent directly
- onTaskEnd() should be called for all started tasks, even after job has been killed
- SparkListener moves on if a listener throws an exception
- registering listeners via spark.extraListeners
- add and remove listeners to/from LiveListenerBus queues
- interrupt within listener is handled correctly: throw interrupt
- interrupt within listener is handled correctly: set Thread interrupted
VersionUtilsSuite:
- Parse Spark major version
- Parse Spark minor version
- Parse Spark major and minor versions
- Return short version number
SizeTrackerSuite:
- vector fixed size insertions
- vector variable size insertions
- map fixed size insertions
- map variable size insertions
- map updates
SortShuffleManagerSuite:
- supported shuffle dependencies for serialized shuffle
- unsupported shuffle dependencies for serialized shuffle
KryoSerializerAutoResetDisabledSuite:
- sort-shuffle with bypassMergeSort (SPARK-7873)
- calling deserialize() after deserializeStream()
- SPARK-25786: ByteBuffer.array -- UnsupportedOperationException
CompressionCodecSuite:
- default compression codec
- lz4 compression codec
- lz4 compression codec short form
- lz4 supports concatenation of serialized streams
- lzf compression codec
- lzf compression codec short form
- lzf supports concatenation of serialized streams
- snappy compression codec
- snappy compression codec short form
- snappy supports concatenation of serialized streams
- zstd compression codec
- zstd compression codec short form
- zstd supports concatenation of serialized streams
- bad compression codec
ChunkedByteBufferFileRegionSuite:
- transferTo can stop and resume correctly
- transfer to with random limits
XORShiftRandomSuite:
- XORShift generates valid random numbers
- XORShift with zero seed
- hashSeed has random bits throughout
CoarseGrainedSchedulerBackendSuite:
- serialized task larger than max RPC message size
- compute max number of concurrent tasks can be launched
- compute max number of concurrent tasks can be launched when spark.task.cpus > 1
- compute max number of concurrent tasks can be launched when some executors are busy
- custom log url for Spark UI is applied
AppendOnlyMapSuite:
- initialization
- object keys and values
- primitive keys and values
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- destructive sort
ConfigReaderSuite:
- variable expansion
- circular references
- spark conf provider filters config keys
ThreadUtilsSuite:
- newDaemonSingleThreadExecutor
- newDaemonSingleThreadScheduledExecutor
- newDaemonCachedThreadPool
- sameThread
- runInNewThread
- parmap should be interruptible
SocketAuthHelperSuite:
- successful auth
- failed auth
RDDOperationScopeSuite:
- equals and hashCode
- getAllScopes
- json de/serialization
- withScope
- withScope with partial nesting
- withScope with multiple layers of nesting
KryoSerializerDistributedSuite:
- kryo objects are serialised consistently in different processes
OpenHashMapSuite:
- size for specialized, primitive value (int)
- initialization
- primitive value
- non-primitive value
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- contains
- distinguish between the 0/0.0/0L and null
OpenHashSetSuite:
- size for specialized, primitive int
- primitive int
- primitive long
- primitive float
- primitive double
- non-primitive
- non-primitive set growth
- primitive set growth
- SPARK-18200 Support zero as an initial set size
- support for more than 12M items
AccumulatorSuite:
- accumulator serialization
- get accum
SparkContextInfoSuite:
- getPersistentRDDs only returns RDDs that are marked as cached
- getPersistentRDDs returns an immutable map
- getRDDStorageInfo only reports on RDDs that actually persist data
- call sites report correct locations
ExecutorAllocationManagerSuite:
- verify min/max executors
- starting state
- add executors
- executionAllocationRatio is correctly handled
- add executors capped by num pending tasks
- add executors when speculative tasks added
- ignore task end events from completed stages
- cancel pending executors when no longer needed
- remove executors
- remove multiple executors
- Removing with various numExecutorsTarget condition
- interleaving add and remove
- starting/canceling add timer
- starting/canceling remove timers
- mock polling loop with no events
- mock polling loop add behavior
- mock polling loop remove behavior
- listeners trigger add executors correctly
- listeners trigger remove executors correctly
- listeners trigger add and remove executor callbacks correctly
- SPARK-4951: call onTaskStart before onExecutorAdded
- SPARK-4951: onExecutorAdded should not add a busy executor to removeTimes
- avoid ramp up when target < running executors
- avoid ramp down initial executors until first job is submitted
- avoid ramp down initial executors until idle executor is timeout
- get pending task number and related locality preference
- SPARK-8366: maxNumExecutorsNeeded should properly handle failed tasks
- reset the state of allocation manager
- SPARK-23365 Don't update target num executors when killing idle executors
- SPARK-26758 check executor target number after idle time out 
- SPARK-26927 call onExecutorRemoved before onTaskStart
MemoryStoreSuite:
- reserve/release unroll memory
- safely unroll blocks
- safely unroll blocks through putIteratorAsValues
- safely unroll blocks through putIteratorAsBytes
- PartiallySerializedBlock.valuesIterator
- PartiallySerializedBlock.finishWritingToStream
- multiple unrolls by the same thread
- lazily create a big ByteBuffer to avoid OOM if it cannot be put into MemoryStore
- put a small ByteBuffer to MemoryStore
- SPARK-22083: Release all locks in evictBlocksToFreeSpace
SparkSubmitSuite:
- prints usage on empty input
- prints usage with only --help
- prints error with unrecognized options
- handle binary specified but not class
- handles arguments with --key=val
- handles arguments to user program
- handles arguments to user program with name collision
- print the right queue name
- SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled
- specify deploy mode through configuration
- handles YARN cluster mode
- handles YARN client mode
- handles standalone cluster mode
- handles legacy standalone cluster mode
- handles standalone client mode
- handles mesos client mode
- handles k8s cluster mode
- handles confs with flag equivalents
- SPARK-21568 ConsoleProgressBar should be enabled only in shells
- launch simple application with spark-submit
- launch simple application with spark-submit with redaction
- includes jars passed in through --jars
- includes jars passed in through --packages
- includes jars passed through spark.jars.packages and spark.jars.repositories
- correctly builds R packages included in a jar with --packages !!! IGNORED !!!
- include an external JAR in SparkR !!! CANCELED !!!
  org.apache.spark.api.r.RUtils.isSparkRInstalled was false SparkR is not installed in this build. (SparkSubmitSuite.scala:630)
- resolves command line argument paths correctly
- ambiguous archive mapping results in error message
- resolves config paths correctly
- user classpath first in driver
- SPARK_CONF_DIR overrides spark-defaults.conf
- support glob path
- downloadFile - invalid url
- downloadFile - file doesn't exist
- downloadFile does not download local file
- download one file to local
- download list of files to local
- remove copies of application jar from classpath
- Avoid re-upload remote resources in yarn client mode
- download remote resource if it is not supported by yarn service
- avoid downloading remote resource if it is supported by yarn service
- force download from blacklisted schemes
- force download for all the schemes
- start SparkApplication without modifying system properties
- support --py-files/spark.submit.pyFiles in non pyspark application
- handles natural line delimiters in --properties-file and --conf uniformly
- get a Spark configuration from arguments
RPackageUtilsSuite:
- pick which jars to unpack using the manifest
- build an R package from a jar end to end
- jars that don't exist are skipped and print warning
- faulty R package shows documentation
- jars without manifest return false
- SparkR zipping works properly
TaskDescriptionSuite:
- encoding and then decoding a TaskDescription results in the same TaskDescription
MeanEvaluatorSuite:
- test count 0
- test count 1
- test count > 1
TopologyMapperSuite:
- File based Topology Mapper
ShuffleNettySuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
CountEvaluatorSuite:
- test count 0
- test count >= 1
KryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- serialization buffer overflow reporting
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true, usePool = true
- instance reuse with autoReset = true, referenceTracking = true, usePool = false
- instance reuse with autoReset = false, referenceTracking = true, usePool = true
- instance reuse with autoReset = false, referenceTracking = true, usePool = false
- instance reuse with autoReset = true, referenceTracking = false, usePool = true
- instance reuse with autoReset = true, referenceTracking = false, usePool = false
- instance reuse with autoReset = false, referenceTracking = false, usePool = true
- instance reuse with autoReset = false, referenceTracking = false, usePool = false
- SPARK-25839 KryoPool implementation works correctly in multi-threaded environment
- SPARK-27216: test RoaringBitmap ser/dser with Kryo
BlacklistTrackerSuite:
- executors can be blacklisted with only a few failures per stage
- executors aren't blacklisted as a result of tasks in failed task sets
- stage blacklist updates correctly on stage success
- stage blacklist updates correctly on stage failure
- blacklisted executors and nodes get recovered with time
- blacklist can handle lost executors
- task failures expire with time
- task failure timeout works as expected for long-running tasksets
- only blacklist nodes for the application when enough executors have failed on that specific host
- blacklist still respects legacy configs
- check blacklist configuration invariants
- blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
- fetch failure blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
FailureSuite:
- failure in a single-stage job
- failure in a two-stage job
- failure in a map stage
- failure because task results are not serializable
- failure because task closure is not serializable
- managed memory leak error should not mask other failures (SPARK-9266)
- last failure cause is sent back to driver
- failure cause stacktrace is sent back to driver if exception is not serializable
- failure cause stacktrace is sent back to driver if exception is not deserializable
- failure in tasks in a submitMapStage
- failure because cached RDD partitions are missing from DiskStore (SPARK-15736)
- SPARK-16304: Link error should not crash executor
PartitionwiseSampledRDDSuite:
- seed distribution
- concurrency
JdbcRDDSuite:
- basic functionality
- large id overflow
FileSuite:
- text files
- text files (compressed)
- text files do not allow null rows
- SequenceFiles
- SequenceFile (compressed)
- SequenceFile with writable key
- SequenceFile with writable value
- SequenceFile with writable key and value
- implicit conversions in reading SequenceFiles
- object files of ints
- object files of complex types
- object files of classes from a JAR
- write SequenceFile using new Hadoop API
- read SequenceFile using new Hadoop API
- binary file input as byte array
- portabledatastream caching tests
- portabledatastream persist disk storage
- portabledatastream flatmap tests
- SPARK-22357 test binaryFiles minPartitions
- minimum split size per node and per rack should be less than or equal to maxSplitSize
- fixed record length binary file as byte array
- negative binary record length should raise an exception
- file caching
- prevent user from overwriting the empty directory (old Hadoop API)
- prevent user from overwriting the non-empty directory (old Hadoop API)
- allow user to disable the output directory existence checking (old Hadoop API)
- prevent user from overwriting the empty directory (new Hadoop API)
- prevent user from overwriting the non-empty directory (new Hadoop API)
- allow user to disable the output directory existence checking (new Hadoop API)
- save Hadoop Dataset through old Hadoop API
- save Hadoop Dataset through new Hadoop API
- Get input files via old Hadoop API
- Get input files via new Hadoop API
- spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD
- spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API)
- spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API)
- spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD
SparkContextSuite:
- Only one SparkContext may be active at a time
- Can still construct a new SparkContext after failing to construct a previous one
- Test getOrCreate
- BytesWritable implicit conversion is correct
- basic case for addFile and listFiles
- add and list jar files
- SPARK-17650: malformed url's throw exceptions before bricking Executors
- addFile recursive works
- addFile recursive can't add directories by default
- cannot call addFile with different paths that have the same filename
- addJar can be called twice with same file in local-mode (SPARK-16787)
- addFile can be called twice with same file in local-mode (SPARK-16787)
- addJar can be called twice with same file in non-local-mode (SPARK-16787)
- addFile can be called twice with same file in non-local-mode (SPARK-16787)
- add jar with invalid path
- SPARK-22585 addJar argument without scheme is interpreted literally without url decoding
- Cancelling job group should not cause SparkContext to shutdown (SPARK-6414)
- Comma separated paths for newAPIHadoopFile/wholeTextFiles/binaryFiles (SPARK-7155)
- Default path for file based RDDs is properly set (SPARK-12517)
- calling multiple sc.stop() must not throw any exception
- No exception when both num-executors and dynamic allocation set.
- localProperties are inherited by spawned threads.
- localProperties do not cross-talk between threads.
- log level case-insensitive and reset log level
- register and deregister Spark listener from SparkContext
- Cancelling stages/jobs with custom reasons.
- client mode with a k8s master url
- Killing tasks that raise interrupted exception on cancel
- Killing tasks that raise runtime exception on cancel
java.lang.Throwable
	at org.apache.spark.DebugFilesystem$.addOpenStream(DebugFilesystem.scala:36)
	at org.apache.spark.DebugFilesystem.open(DebugFilesystem.scala:70)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
	at org.apache.spark.SparkContextSuite.$anonfun$new$59(SparkContextSuite.scala:611)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:105)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkContextSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkContextSuite.scala:43)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkContextSuite.runTest(SparkContextSuite.scala:43)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1147)
	at org.scalatest.Suite.run$(Suite.scala:1129)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1210)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1257)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1255)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1189)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
	at org.scalatest.Suite.run(Suite.scala:1144)
	at org.scalatest.Suite.run$(Suite.scala:1129)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1346)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1340)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1340)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:1031)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:1010)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1506)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1010)
	at org.scalatest.tools.Runner$.main(Runner.scala:827)
	at org.scalatest.tools.Runner.main(Runner.scala)
- SPARK-19446: DebugFilesystem.assertNoOpenStreams should report open streams to help debugging
- support barrier execution mode under local mode
- support barrier execution mode under local-cluster mode
- cancel zombie tasks in a result stage when the job finishes
- Avoid setting spark.task.cpus unreasonably (SPARK-27192)
DiskBlockObjectWriterSuite:
- verify write metrics
- verify write metrics on revert
- Reopening a closed block writer
- calling revertPartialWritesAndClose() on a partial write should truncate up to commit
- calling revertPartialWritesAndClose() after commit() should have no effect
- calling revertPartialWritesAndClose() on a closed block writer should have no effect
- commit() and close() should be idempotent
- revertPartialWritesAndClose() should be idempotent
- commit() and close() without ever opening or writing
ThreadingSuite:
- accessing SparkContext from a different thread
- accessing SparkContext from multiple threads
- accessing multi-threaded SparkContext from multiple threads
- parallel job execution
- set local properties in different thread
- set and get local properties in parent-children thread
- mutation in parent local property does not affect child (SPARK-10563)
PythonRDDSuite:
- Writing large strings to the worker
- Handle nulls gracefully
- python server error handling
ShuffleDependencySuite:
- key, value, and combiner classes correct in shuffle dependency without aggregation
- key, value, and combiner classes available in shuffle dependency with aggregation
- combineByKey null combiner class tag handled correctly
JVMObjectTrackerSuite:
- JVMObjectId does not take null IDs
- JVMObjectTracker
ClosureCleanerSuite2:
- clean basic serializable closures
- clean basic non-serializable closures
- clean basic nested serializable closures
- clean basic nested non-serializable closures
- clean complicated nested serializable closures
- clean complicated nested non-serializable closures
PartitionPruningRDDSuite:
- Pruned Partitions inherit locality prefs correctly
- Pruned Partitions can be unioned 
SimpleDateParamSuite:
- date parsing
StorageSuite:
- storage status add non-RDD blocks
- storage status add RDD blocks
- storage status getBlock
- storage status memUsed, diskUsed, externalBlockStoreUsed
- storage memUsed, diskUsed with on-heap and off-heap blocks
- old SparkListenerBlockManagerAdded event compatible
CausedBySuite:
- For an error without a cause, should return the error
- For an error with a cause, should return the cause of the error
- For an error with a cause that itself has a cause, return the root cause
JavaUtilsSuite:
- containsKey implementation without iteratively entrySet call
FileAppenderSuite:
- basic file appender
- rolling file appender - time-based rolling
- rolling file appender - time-based rolling (compressed)
- rolling file appender - size-based rolling
- rolling file appender - size-based rolling (compressed)
- rolling file appender - cleaning
- file appender selection
- file appender async close stream abruptly
- file appender async close stream gracefully
BypassMergeSortShuffleWriterSuite:
- write empty iterator
- write with some empty partitions
- only generate temp shuffle file for non-empty partition
- cleanup of intermediate files after errors
DistributedSuite:
- task throws not serializable exception
- local-cluster format
- simple groupByKey
- groupByKey where map output sizes exceed maxMbInFlight
- accumulators
- broadcast variables
- repeatedly failing task
- repeatedly failing task that crashes JVM
- repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925)
- caching (encryption = off)
- caching (encryption = on)
- caching on disk (encryption = off)
- caching on disk (encryption = on)
- caching in memory, replicated (encryption = off)
- caching in memory, replicated (encryption = off) (with replication as stream)
- caching in memory, replicated (encryption = on)
- caching in memory, replicated (encryption = on) (with replication as stream)
- caching in memory, serialized, replicated (encryption = off)
- caching in memory, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory, serialized, replicated (encryption = on)
- caching in memory, serialized, replicated (encryption = on) (with replication as stream)
- caching on disk, replicated (encryption = off)
- caching on disk, replicated (encryption = off) (with replication as stream)
- caching on disk, replicated (encryption = on)
- caching on disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, replicated (encryption = off)
- caching in memory and disk, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, replicated (encryption = on)
- caching in memory and disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = off)
- caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = on)
- caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream)
- compute without caching when no partitions fit in memory
- compute when only some partitions fit in memory
- passing environment variables to cluster
- recover from node failures
- recover from repeated node failures during shuffle-map
- recover from repeated node failures during shuffle-reduce
- recover from node failures with replication
- unpersist RDDs
FutureActionSuite:
- simple async action
- complex async action
LocalCheckpointSuite:
- transform storage level
- basic lineage truncation
- basic lineage truncation - caching before checkpointing
- basic lineage truncation - caching after checkpointing
- indirect lineage truncation
- indirect lineage truncation - caching before checkpointing
- indirect lineage truncation - caching after checkpointing
- checkpoint without draining iterator
- checkpoint without draining iterator - caching before checkpointing
- checkpoint without draining iterator - caching after checkpointing
- checkpoint blocks exist
- checkpoint blocks exist - caching before checkpointing
- checkpoint blocks exist - caching after checkpointing
- missing checkpoint block fails with informative message
WorkerWatcherSuite:
- WorkerWatcher shuts down on valid disassociation
- WorkerWatcher stays alive on invalid disassociation
ExternalShuffleServiceDbSuite:
- Recover shuffle data with spark.shuffle.service.db.enabled=true after shuffle service restart
- Can't recover shuffle data with spark.shuffle.service.db.enabled=false after shuffle service restart
NettyRpcEnvSuite:
- send a message locally
- send a message remotely
- send a RpcEndpointRef
- ask a message locally
- ask a message remotely
- ask a message timeout
- onStart and onStop
- onError: error in onStart
- onError: error in onStop
- onError: error in receive
- self: call in onStart
- self: call in receive
- self: call in onStop
- call receive in sequence
- stop(RpcEndpointRef) reentrant
- sendWithReply
- sendWithReply: remotely
- sendWithReply: error
- sendWithReply: remotely error
- network events in server RpcEnv when another RpcEnv is in server mode
- network events in server RpcEnv when another RpcEnv is in client mode
- network events in client RpcEnv when another RpcEnv is in server mode
- sendWithReply: unserializable error
- port conflict
- send with authentication
- send with SASL encryption
- send with AES encryption
- ask with authentication
- ask with SASL encryption
- ask with AES encryption
- construct RpcTimeout with conf property
- ask a message timeout on Future using RpcTimeout
- file server
- SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events
- non-existent endpoint
- advertise address different from bind address
- RequestMessage serialization
PagedTableSuite:
- pageNavigation
ClientSuite:
- correctly validates driver jar URL's
BlockIdSuite:
- test-bad-deserialization
- rdd
- shuffle
- shuffle data
- shuffle index
- broadcast
- taskresult
- stream
- temp local
- temp shuffle
- test
PartiallyUnrolledIteratorSuite:
- join two iterators
KryoSerializerResizableOutputSuite:
- kryo without resizable output buffer should fail on large array
- kryo with resizable output buffer should succeed on large array
BlockManagerReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
BarrierTaskContextSuite:
- global sync by barrier() call
- support multiple barrier() call within a single task
- throw exception on barrier() call timeout
- throw exception if barrier() call doesn't happen on every task
- throw exception if the number of barrier() calls are not the same on every task
BlockStoreShuffleReaderSuite:
- read() releases resources on completion
WholeTextFileRecordReaderSuite:
- Correctness of WholeTextFileRecordReader.
- Correctness of WholeTextFileRecordReader with GzipCodec.
SubmitRestProtocolSuite:
- validate
- request to and from JSON
- response to and from JSON
- CreateSubmissionRequest
- CreateSubmissionResponse
- KillSubmissionResponse
- SubmissionStatusResponse
- ErrorResponse
FlatmapIteratorSuite:
- Flatmap Iterator to Disk
- Flatmap Iterator to Memory
- Serializer Reset
SizeEstimatorSuite:
- simple classes
- primitive wrapper objects
- class field blocks rounding
- strings
- primitive arrays
- object arrays
- 32-bit arch
- 64-bit arch with no compressed oops
- class field blocks rounding on 64-bit VM without useCompressedOops
- check 64-bit detection for s390x arch
- SizeEstimation can provide the estimated size
ElementTrackingStoreSuite:
- tracking for multiple types
PipedRDDSuite:
- basic pipe
- basic pipe with tokenization
- failure in iterating over pipe input
- stdin writer thread should be exited when task is finished
- advanced pipe
- pipe with empty partition
- pipe with env variable
- pipe with process which cannot be launched due to bad command
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
- pipe with process which is launched but fails with non-zero exit status
- basic pipe with separate working directory
- test pipe exports map_input_file
- test pipe exports mapreduce_map_input_file
AccumulatorV2Suite:
- LongAccumulator add/avg/sum/count/isZero
- DoubleAccumulator add/avg/sum/count/isZero
- ListAccumulator
InboxSuite:
- post
- post: with reply
- post: multiple threads
- post: Associated
- post: Disassociated
- post: AssociationError
MasterWebUISuite:
- kill application
- kill driver
RadixSortSuite:
- radix support for unsigned binary data asc nulls first
- sort unsigned binary data asc nulls first
- sort key prefix unsigned binary data asc nulls first
- fuzz test unsigned binary data asc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls first with random bitmasks
- radix support for unsigned binary data asc nulls last
- sort unsigned binary data asc nulls last
- sort key prefix unsigned binary data asc nulls last
- fuzz test unsigned binary data asc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls last
- sort unsigned binary data desc nulls last
- sort key prefix unsigned binary data desc nulls last
- fuzz test unsigned binary data desc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls first
- sort unsigned binary data desc nulls first
- sort key prefix unsigned binary data desc nulls first
- fuzz test unsigned binary data desc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls first with random bitmasks
- radix support for twos complement asc nulls first
- sort twos complement asc nulls first
- sort key prefix twos complement asc nulls first
- fuzz test twos complement asc nulls first with random bitmasks
- fuzz test key prefix twos complement asc nulls first with random bitmasks
- radix support for twos complement asc nulls last
- sort twos complement asc nulls last
- sort key prefix twos complement asc nulls last
- fuzz test twos complement asc nulls last with random bitmasks
- fuzz test key prefix twos complement asc nulls last with random bitmasks
- radix support for twos complement desc nulls last
- sort twos complement desc nulls last
- sort key prefix twos complement desc nulls last
- fuzz test twos complement desc nulls last with random bitmasks
- fuzz test key prefix twos complement desc nulls last with random bitmasks
- radix support for twos complement desc nulls first
- sort twos complement desc nulls first
- sort key prefix twos complement desc nulls first
- fuzz test twos complement desc nulls first with random bitmasks
- fuzz test key prefix twos complement desc nulls first with random bitmasks
- radix support for binary data partial
- sort binary data partial
- sort key prefix binary data partial
- fuzz test binary data partial with random bitmasks
- fuzz test key prefix binary data partial with random bitmasks
DiskBlockManagerSuite:
- basic block creation
- enumerating blocks
- SPARK-22227: non-block files are skipped
WorkerArgumentsTest:
- Memory can't be set to 0 when cmd line args leave off M or G
- Memory can't be set to 0 when SPARK_WORKER_MEMORY env property leaves off M or G
- Memory correctly set when SPARK_WORKER_MEMORY env property appends G
- Memory correctly set from args with M appended to memory value
StatusTrackerSuite:
- basic status API usage
- getJobIdsForGroup()
- getJobIdsForGroup() with takeAsync()
- getJobIdsForGroup() with takeAsync() across multiple partitions
PrimitiveKeyOpenHashMapSuite:
- size for specialized, primitive key, value (int, int)
- initialization
- basic operations
- null values
- changeValue
- inserting in capacity-1 map
- contains
ApplicationCacheSuite:
- Completed UI get
- Test that if an attempt ID is set, it must be used in lookups
- Incomplete apps refreshed
- Large Scale Application Eviction
- Attempts are Evicted
- redirect includes query params
StandaloneDynamicAllocationSuite:
- dynamic allocation default behavior
- dynamic allocation with max cores <= cores per worker
- dynamic allocation with max cores > cores per worker
- dynamic allocation with cores per executor
- dynamic allocation with cores per executor AND max cores
- kill the same executor twice (SPARK-9795)
- the pending replacement executors should not be lost (SPARK-10515)
- disable force kill for busy executors (SPARK-9552)
- initial executor limit
- kill all executors on localhost
- executor registration on a blacklisted host must fail
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@36600e1c rejected from java.util.concurrent.ThreadPoolExecutor@19f7027e[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 19]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:874)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:872)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
ExternalClusterManagerSuite:
- launch of backend and scheduler
LogUrlsStandaloneSuite:
- verify that correct log urls get propagated from workers
- verify that log urls reflect SPARK_PUBLIC_DNS (SPARK-6175)
AppClientSuite:
- interface methods of AppClient using local Master
- request from AppClient before initialized with master
InternalAccumulatorSuite:
- internal accumulators in TaskContext
- internal accumulators in a stage
- internal accumulators in multiple stages
- internal accumulators in resubmitted stages
- internal accumulators are registered for cleanups
JsonProtocolSuite:
- SparkListenerEvent
- Dependent Classes
- ExceptionFailure backward compatibility: full stack trace
- StageInfo backward compatibility (details, accumulables)
- InputMetrics backward compatibility
- Input/Output records backwards compatibility
- Shuffle Read/Write records backwards compatibility
- OutputMetrics backward compatibility
- BlockManager events backward compatibility
- FetchFailed backwards compatibility
- ShuffleReadMetrics: Local bytes read backwards compatibility
- SparkListenerApplicationStart backwards compatibility
- ExecutorLostFailure backward compatibility
- SparkListenerJobStart backward compatibility
- SparkListenerJobStart and SparkListenerJobEnd backward compatibility
- RDDInfo backward compatibility (scope, parent IDs, callsite)
- StageInfo backward compatibility (parent IDs)
- TaskCommitDenied backward compatibility
- AccumulableInfo backward compatibility
- ExceptionFailure backward compatibility: accumulator updates
- ExecutorMetricsUpdate backward compatibility: executor metrics update
- executorMetricsFromJson backward compatibility: handle missing metrics
- AccumulableInfo value de/serialization
BroadcastSuite:
- Using TorrentBroadcast locally
- Accessing TorrentBroadcast variables from multiple threads
- Accessing TorrentBroadcast variables in a local cluster (encryption = off)
- Accessing TorrentBroadcast variables in a local cluster (encryption = on)
- TorrentBroadcast's blockifyObject and unblockifyObject are inverses
- Test Lazy Broadcast variables with TorrentBroadcast
- Unpersisting TorrentBroadcast on executors only in local mode
- Unpersisting TorrentBroadcast on executors and driver in local mode
- Unpersisting TorrentBroadcast on executors only in distributed mode
- Unpersisting TorrentBroadcast on executors and driver in distributed mode
- Using broadcast after destroy prints callsite
- Broadcast variables cannot be created after SparkContext is stopped (SPARK-5065)
- Forbid broadcasting RDD directly
- Cache broadcast to disk (encryption = off)
- Cache broadcast to disk (encryption = on)
- One broadcast value instance per executor
- One broadcast value instance per executor when memory is constrained
TaskSetBlacklistSuite:
- Blacklisting tasks, executors, and nodes
- multiple attempts for the same task count once
- only blacklist nodes for the task set when all the blacklisted executors are all on same host
SerializerPropertiesSuite:
- JavaSerializer does not support relocation
- KryoSerializer supports relocation when auto-reset is enabled
- KryoSerializer does not support relocation when auto-reset is disabled
EventLoopSuite:
- EventLoop
- EventLoop: start and stop
- EventLoop: onError
- EventLoop: error thrown from onError should not crash the event thread
- EventLoop: calling stop multiple times should only call onStop once
- EventLoop: post event in multiple threads
- EventLoop: onReceive swallows InterruptException
- EventLoop: stop in eventThread
- EventLoop: stop() in onStart should call onStop
- EventLoop: stop() in onReceive should call onStop
- EventLoop: stop() in onError should call onStop
ZippedPartitionsSuite:
- print sizes
DiskStoreSuite:
- reads of memory-mapped and non memory-mapped files are equivalent
- block size tracking
- blocks larger than 2gb
- block data encryption
LiveEntitySuite:
- partition seq
DoubleRDDSuite:
- sum
- WorksOnEmpty
- WorksWithOutOfRangeWithOneBucket
- WorksInRangeWithOneBucket
- WorksInRangeWithOneBucketExactMatch
- WorksWithOutOfRangeWithTwoBuckets
- WorksWithOutOfRangeWithTwoUnEvenBuckets
- WorksInRangeWithTwoBuckets
- WorksInRangeWithTwoBucketsAndNaN
- WorksInRangeWithTwoUnevenBuckets
- WorksMixedRangeWithTwoUnevenBuckets
- WorksMixedRangeWithFourUnevenBuckets
- WorksMixedRangeWithUnevenBucketsAndNaN
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity
- WorksWithOutOfRangeWithInfiniteBuckets
- ThrowsExceptionOnInvalidBucketArray
- WorksWithoutBucketsBasic
- WorksWithoutBucketsBasicSingleElement
- WorksWithoutBucketsBasicNoRange
- WorksWithoutBucketsBasicTwo
- WorksWithDoubleValuesAtMinMax
- WorksWithoutBucketsWithMoreRequestedThanElements
- WorksWithoutBucketsForLargerDatasets
- WorksWithoutBucketsWithNonIntegralBucketEdges
- WorksWithHugeRange
- ThrowsExceptionOnInvalidRDDs
AppStatusStoreSuite:
- quantile calculation: 1 task
- quantile calculation: few tasks
- quantile calculation: more tasks
- quantile calculation: lots of tasks
- quantile calculation: custom quantiles
- quantile cache
- only successful tasks have taskSummary
- summary should contain task metrics of only successful tasks
SorterSuite:
- equivalent to Arrays.sort
- KVArraySorter
- SPARK-5984 TimSort bug
- java.lang.ArrayIndexOutOfBoundsException in TimSort
- Sorter benchmark for key-value pairs !!! IGNORED !!!
- Sorter benchmark for primitive int array !!! IGNORED !!!
MedianHeapSuite:
- If no numbers in MedianHeap, NoSuchElementException is thrown.
- Median should be correct when size of MedianHeap is even
- Median should be correct when size of MedianHeap is odd
- Median should be correct though there are duplicated numbers inside.
- Median should be correct when input data is skewed.
PoolSuite:
- FIFO Scheduler Test
- Fair Scheduler Test
- Nested Pool Test
- SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid data
- FIFO scheduler uses root pool and not spark.scheduler.pool property
- FAIR Scheduler uses default pool when spark.scheduler.pool property is not set
- FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool
- Pool should throw IllegalArgumentException when schedulingMode is not supported
- Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set
- Fair Scheduler should use default file (fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set
- Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set
DistributionSuite:
- summary
ContextCleanerSuite:
- cleanup RDD
- cleanup shuffle
- cleanup broadcast
- automatically cleanup RDD
- automatically cleanup shuffle
- automatically cleanup broadcast
- automatically cleanup normal checkpoint
- automatically clean up local checkpoint
- automatically cleanup RDD + shuffle + broadcast
- automatically cleanup RDD + shuffle + broadcast in distributed mode
JsonProtocolSuite:
- writeApplicationInfo
- writeWorkerInfo
- writeApplicationDescription
- writeExecutorRunner
- writeDriverInfo
- writeMasterState
- writeWorkerState
HeartbeatReceiverSuite:
- task scheduler is set correctly
- normal heartbeat
- reregister if scheduler is not ready yet
- reregister if heartbeat from unregistered executor
- reregister if heartbeat from removed executor
- expire dead hosts
- expire dead hosts should kill executors with replacement (SPARK-8119)
AccumulatorSourceSuite:
- that accumulators register against the metric system's register
- the accumulators value property is checked when the gauge's value is requested
- the double accumulators value property is checked when the gauge's value is requested
ReplayListenerSuite:
- Simple replay
- Replay compressed inprogress log file succeeding on partial read
- Replay incompatible event log
- End-to-end replay
- End-to-end replay with compression
UIUtilsSuite:
- makeDescription(plainText = false)
- makeDescription(plainText = true)
- SPARK-11906: Progress bar should not overflow because of speculative tasks
- decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.)
MutableURLClassLoaderSuite:
- child first
- parent first
- child first can fall back
- child first can fail
- default JDK classloader get resources
- parent first get resources
- child first get resources
- driver sets context class loader in local mode
CheckpointSuite:
- basic checkpointing [reliable checkpoint]
- basic checkpointing [local checkpoint]
- checkpointing partitioners [reliable checkpoint]
- RDDs with one-to-one dependencies [reliable checkpoint]
- RDDs with one-to-one dependencies [local checkpoint]
- ParallelCollectionRDD [reliable checkpoint]
- ParallelCollectionRDD [local checkpoint]
- BlockRDD [reliable checkpoint]
- BlockRDD [local checkpoint]
- ShuffleRDD [reliable checkpoint]
- ShuffleRDD [local checkpoint]
- UnionRDD [reliable checkpoint]
- UnionRDD [local checkpoint]
- CartesianRDD [reliable checkpoint]
- CartesianRDD [local checkpoint]
- CoalescedRDD [reliable checkpoint]
- CoalescedRDD [local checkpoint]
- CoGroupedRDD [reliable checkpoint]
- CoGroupedRDD [local checkpoint]
- ZippedPartitionsRDD [reliable checkpoint]
- ZippedPartitionsRDD [local checkpoint]
- PartitionerAwareUnionRDD [reliable checkpoint]
- PartitionerAwareUnionRDD [local checkpoint]
- CheckpointRDD with zero partitions [reliable checkpoint]
- CheckpointRDD with zero partitions [local checkpoint]
- checkpointAllMarkedAncestors [reliable checkpoint]
- checkpointAllMarkedAncestors [local checkpoint]
AppStatusUtilsSuite:
- schedulerDelay
IndexShuffleBlockResolverSuite:
- commit shuffle files multiple times
TaskResultGetterSuite:
- handling results smaller than max RPC message size
- handling results larger than max RPC message size
- task retried if result missing from block manager
- failed task deserialized with the correct classloader (SPARK-11195)
- task result size is set on the driver, not the executors
- failed task is handled when error occurs deserializing the reason
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:269)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:193)
	at sun.reflect.GeneratedMethodAccessor207.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$2(TaskResultGetter.scala:135)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1930)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$1(TaskResultGetter.scala:131)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
TopologyAwareBlockReplicationPolicyBehavior:
- block replication - random block replication policy
- All peers in the same rack
- Peers in 2 racks
PersistenceEngineSuite:
- FileSystemPersistenceEngine
- ZooKeeperPersistenceEngine
MasterSuite:
- can use a custom recovery mode factory
- master correctly recover the application
- master/worker web ui available
- master/worker web ui available with reverseProxy
- basic scheduling - spread out
- basic scheduling - no spread out
- basic scheduling with more memory - spread out
- basic scheduling with more memory - no spread out
- scheduling with max cores - spread out
- scheduling with max cores - no spread out
- scheduling with cores per executor - spread out
- scheduling with cores per executor - no spread out
- scheduling with cores per executor AND max cores - spread out
- scheduling with cores per executor AND max cores - no spread out
- scheduling with executor limit - spread out
- scheduling with executor limit - no spread out
- scheduling with executor limit AND max cores - spread out
- scheduling with executor limit AND max cores - no spread out
- scheduling with executor limit AND cores per executor - spread out
- scheduling with executor limit AND cores per executor - no spread out
- scheduling with executor limit AND cores per executor AND max cores - spread out
- scheduling with executor limit AND cores per executor AND max cores - no spread out
- SPARK-13604: Master should ask Worker to kill unknown executors and drivers
- SPARK-20529: Master should reply the address received from worker
- SPARK-19900: there should be a corresponding driver for the app after relaunching driver
CheckpointCompressionSuite:
- checkpoint compression
ExternalAppendOnlyMapSuite:
- single insert
- multiple insert
- insert with collision
- ordering
- null keys and values
- simple aggregator
- simple cogroup
- spilling
- spilling with compression
- spilling with compression and encryption
- ExternalAppendOnlyMap shouldn't fail when forced to spill before calling its iterator
- spilling with hash collisions
- spilling with many hash collisions
- spilling with hash collisions using the Int.MaxValue key
- spilling with null keys and values
- SPARK-22713 spill during iteration leaks internal map
- drop all references to the underlying map once the iterator is exhausted
- SPARK-22713 external aggregation updates peak execution memory
- force to spill for external aggregation
AdaptiveSchedulingSuite:
- simple use of submitMapStage
- fetching multiple map output partitions per reduce
- fetching all map output partitions in one reduce
- more reduce tasks than map output partitions
GenericAvroSerializerSuite:
- schema compression and decompression
- record serialization and deserialization
- uses schema fingerprint to decrease message size
- caches previously seen schemas
BlacklistIntegrationSuite:
- If preferred node is bad, without blacklist job will fail
- With default settings, job can succeed despite multiple bad executors on node
- Bad node with multiple executors, job will still succeed with the right confs
- SPARK-15865 Progress with fewer executors than maxTaskFailures
AppStatusListenerSuite:
- environment info
- scheduler events
- storage events
- eviction of old data
- eviction should respect job completion time
- eviction should respect stage completion time
- skipped stages should be evicted before completed stages
- eviction should respect task completion time
- lastStageAttempt should fail when the stage doesn't exist
- SPARK-24415: update metrics for tasks that finish late
- Total tasks in the executor summary should match total stage tasks (live = true)
- Total tasks in the executor summary should match total stage tasks (live = false)
- driver logs
- executor metrics updates
- stage executor metrics
- storage information on executor lost/down
BoundedPriorityQueueSuite:
- BoundedPriorityQueue poll test
ProactiveClosureSerializationSuite:
- throws expected serialization exceptions on actions
- mapPartitions transformations throw proactive serialization exceptions
- map transformations throw proactive serialization exceptions
- filter transformations throw proactive serialization exceptions
- flatMap transformations throw proactive serialization exceptions
- mapPartitionsWithIndex transformations throw proactive serialization exceptions
Run completed in 20 minutes, 26 seconds.
Total number of tests run: 2241
Suites: completed 223, aborted 0
Tests: succeeded 2240, failed 1, canceled 1, ignored 7, pending 0
*** 1 TEST FAILED ***
[INFO] 
[INFO] --------------< org.apache.spark:spark-mllib-local_2.12 >---------------
[INFO] Building Spark Project ML Local Library 3.0.0-SNAPSHOT           [10/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.12:0.9_1.5.1:format (default) @ spark-mllib-local_2.12 ---
[INFO] Skip flag set, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:add-source (eclipse-add-source) @ spark-mllib-local_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/mllib-local/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/mllib-local/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-mllib-local_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/spire-math/spire-macros_2.12/0.13.0/spire-macros_2.12-0.13.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.12/2.3.2/shapeless_2.12-2.3.2.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.12/1.1.1/macro-compat_2.12-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.12/0.13.2/breeze-macros_2.12-0.13.2.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.12/0.6.1/machinist_2.12-0.6.1.jar:/home/jenkins/.m2/repository/org/spire-math/spire_2.12/0.13.0/spire_2.12-0.13.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/scalanlp/breeze_2.12/0.13.2/breeze_2.12-0.13.2.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/com/github/rwl/jtransforms/2.4.0/jtransforms-2.4.0.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-mllib-local_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/mllib-local/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ spark-mllib-local_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:compile (scala-compile-first) @ spark-mllib-local_2.12 ---
[INFO] Using zinc server for incremental compilation
[INFO] Toolchain in scala-maven-plugin: /usr/lib/jvm/java-8-oracle
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/mllib-local/target/scala-2.12/classes...
[info] Compile success at Apr 19, 2019 4:07:31 AM [1.888s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-mllib-local_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-mllib-local_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/mllib-local/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ spark-mllib-local_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-mllib-local_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.0.6/scala-xml_2.12-1.0.6.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.12/2.3.2/shapeless_2.12-2.3.2.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.12/0.13.2/breeze-macros_2.12-0.13.2.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.9.3/byte-buddy-1.9.3.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.12/0.6.1/machinist_2.12-0.6.1.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.9.3/byte-buddy-agent-1.9.3.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/mockito/mockito-core/2.23.4/mockito-core-2.23.4.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.6/objenesis-2.6.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.5/scalactic_2.12-3.0.5.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.12/1.13.5/scalacheck_2.12-1.13.5.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/spire-math/spire-macros_2.12/0.13.0/spire-macros_2.12-0.13.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.12/1.1.1/macro-compat_2.12-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/spire-math/spire_2.12/0.13.0/spire_2.12-0.13.0.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze_2.12/0.13.2/breeze_2.12-0.13.2.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/tags/target/scala-2.12/test-classes:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.5/scalatest_2.12-3.0.5.jar:/home/jenkins/.m2/repository/com/github/rwl/jtransforms/2.4.0/jtransforms-2.4.0.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:testCompile (scala-test-compile-first) @ spark-mllib-local_2.12 ---
[INFO] Using zinc server for incremental compilation
[INFO] Toolchain in scala-maven-plugin: /usr/lib/jvm/java-8-oracle
[info] Compile success at Apr 19, 2019 4:07:32 AM [0.095s]
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (default-test) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (test) @ spark-mllib-local_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-mllib-local_2.12 ---
Discovery starting.
Discovery completed in 227 milliseconds.
Run starting. Expected test count is: 85
BLASSuite:
- copy
Apr 19, 2019 4:07:34 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Apr 19, 2019 4:07:34 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
- scal
- axpy
- dot
- spr
- syr
- gemm
- gemv
- spmv
UtilsSuite:
- EPSILON
TestingUtilsSuite:
- Comparing doubles using relative error.
- Comparing doubles using absolute error.
- Comparing vectors using relative error.
- Comparing vectors using absolute error.
- Comparing Matrices using absolute error.
- Comparing Matrices using relative error.
BreezeMatrixConversionSuite:
- dense matrix to breeze
- dense breeze matrix to matrix
- sparse matrix to breeze
- sparse breeze matrix to sparse matrix
BreezeVectorConversionSuite:
- dense to breeze
- sparse to breeze
- dense breeze to vector
- sparse breeze to vector
- sparse breeze with partially-used arrays to vector
MultivariateGaussianSuite:
Apr 19, 2019 4:07:35 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Apr 19, 2019 4:07:35 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
- univariate
- multivariate
- multivariate degenerate
- SPARK-11302
MatricesSuite:
- dense matrix construction
- dense matrix construction with wrong dimension
- sparse matrix construction
- sparse matrix construction with wrong number of elements
- index in matrices incorrect input
- equals
- matrix copies are deep copies
- matrix indexing and updating
- dense to dense
- dense to sparse
- sparse to sparse
- sparse to dense
- compressed dense
- compressed sparse
- map, update
- transpose
- foreachActive
- horzcat, vertcat, eye, speye
- zeros
- ones
- eye
- rand
- randn
- diag
- sprand
- sprandn
- toString
- numNonzeros and numActives
- fromBreeze with sparse matrix
- row/col iterator
VectorsSuite:
- dense vector construction with varargs
- dense vector construction from a double array
- sparse vector construction
- sparse vector construction with unordered elements
- sparse vector construction with mismatched indices/values array
- sparse vector construction with too many indices vs size
- sparse vector construction with negative indices
- dense to array
- dense argmax
- sparse to array
- sparse argmax
- vector equals
- vectors equals with explicit 0
- indexing dense vectors
- indexing sparse vectors
- zeros
- Vector.copy
- fromBreeze
- sqdist
- foreachActive
- vector p-norm
- Vector numActive and numNonzeros
- Vector toSparse and toDense
- Vector.compressed
- SparseVector.slice
- sparse vector only support non-negative length
Run completed in 2 seconds, 385 milliseconds.
Total number of tests run: 85
Suites: completed 9, aborted 0
Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project GraphX
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Catalyst
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project SQL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project ML Library
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] -----------------< org.apache.spark:spark-tools_2.12 >------------------
[INFO] Building Spark Project Tools 3.0.0-SNAPSHOT                      [11/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.12:0.9_1.5.1:format (default) @ spark-tools_2.12 ---
[INFO] Skip flag set, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:add-source (eclipse-add-source) @ spark-tools_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/tools/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/tools/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.8/scala-compiler-2.12.8.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.1.2/classutil_2.12-1.1.2.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/5.1/asm-tree-5.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.0.6/scala-xml_2.12-1.0.6.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/5.1/asm-commons-5.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/5.1/asm-5.1.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.2.0/grizzled-scala_2.12-4.2.0.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/tools/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ spark-tools_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:compile (scala-compile-first) @ spark-tools_2.12 ---
[INFO] Using zinc server for incremental compilation
[INFO] Toolchain in scala-maven-plugin: /usr/lib/jvm/java-8-oracle
[info] Compile success at Apr 19, 2019 4:07:36 AM [0.026s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-tools_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/tools/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ spark-tools_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/5.1/asm-tree-5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.0.6/scala-xml_2.12-1.0.6.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.5/scalactic_2.12-3.0.5.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/5.1/asm-commons-5.1.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.2.0/grizzled-scala_2.12-4.2.0.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.8/scala-compiler-2.12.8.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.1.2/classutil_2.12-1.1.2.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.5/scalatest_2.12-3.0.5.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/5.1/asm-5.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:testCompile (scala-test-compile-first) @ spark-tools_2.12 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (default-test) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (test) @ spark-tools_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-tools_2.12 ---
Discovery starting.
Discovery completed in 71 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 122 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project REPL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --------------< org.apache.spark:spark-network-yarn_2.12 >--------------
[INFO] Building Spark Project YARN Shuffle Service 3.0.0-SNAPSHOT       [12/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.12:0.9_1.5.1:format (default) @ spark-network-yarn_2.12 ---
[INFO] Skip flag set, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:add-source (eclipse-add-source) @ spark-network-yarn_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-yarn/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-yarn/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.8.1/commons-lang3-3.8.1.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.30.Final/netty-all-4.1.30.Final.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.8/jackson-annotations-2.9.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.5/metrics-core-3.1.5.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.8/jackson-core-2.9.8.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-yarn/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:compile (scala-compile-first) @ spark-network-yarn_2.12 ---
[INFO] Using zinc server for incremental compilation
[INFO] Toolchain in scala-maven-plugin: /usr/lib/jvm/java-8-oracle
[info] Compiling 3 Java sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-yarn/target/scala-2.12/classes...
[info] Compile success at Apr 19, 2019 4:07:40 AM [1.726s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-network-yarn_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-yarn/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/2.7.4/hadoop-mapreduce-client-common-2.7.4.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.30.Final/netty-all-4.1.30.Final.jar:/home/jenkins/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.10/httpcore-4.4.10.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.7.4/hadoop-mapreduce-client-jobclient-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-framework/2.7.1/curator-framework-2.7.1.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-recipes/2.7.1/curator-recipes-2.7.1.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.8.2/avro-1.8.2.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.5/metrics-core-3.1.5.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client/2.7.4/hadoop-client-2.7.4.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.3/snappy-java-1.1.7.3.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/xml-apis/xml-apis/1.4.01/xml-apis-1.4.01.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/org/tukaani/xz/1.5/xz-1.5.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.7.4/hadoop-yarn-api-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/2.7.4/hadoop-common-2.7.4.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.7.4/hadoop-yarn-server-common-2.7.4.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/home/jenkins/.m2/repository/io/netty/netty/3.9.9.Final/netty-3.9.9.Final.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.7.4/hadoop-yarn-client-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.7.4/hadoop-mapreduce-client-app-2.7.4.jar:/home/jenkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.5/scalatest_2.12-3.0.5.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.4/hadoop-yarn-common-2.7.4.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.8/jackson-core-2.9.8.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-annotations/2.7.4/hadoop-annotations-2.7.4.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.0.6/scala-xml_2.12-1.0.6.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/jenkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/home/jenkins/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/home/jenkins/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-auth/2.7.4/hadoop-auth-2.7.4.jar:/home/jenkins/.m2/repository/javax/activation/activation/1.1.1/activation-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.7.4/hadoop-mapreduce-client-shuffle-2.7.4.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar:/home/jenkins/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar:/home/jenkins/.m2/repository/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-log4j12/1.7.16/slf4j-log4j12-1.7.16.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.5/scalactic_2.12-3.0.5.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.8.1/commons-lang3-3.8.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/hadoop-mapreduce-client-core-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/jenkins/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.8/jackson-annotations-2.9.8.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-testing/common/tags/target/scala-2.12/test-classes:/home/jenkins/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.4.4:testCompile (scala-test-compile-first) @ spark-network-yarn_2.12 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (default-test) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M2:test (test) @ spark-network-yarn_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-network-yarn_2.12 ---
Discovery starting.
Discovery completed in 88 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 146 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project YARN
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Mesos
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive Thrift Server
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Token Provider for Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Source for Structured Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Kinesis Integration
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Examples
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10 Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Avro
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Kinesis Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.392 s]
[INFO] Spark Project Tags ................................. SUCCESS [  3.235 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 19.892 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  3.596 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 54.068 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 11.152 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  3.789 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  3.925 s]
[INFO] Spark Project Core ................................. FAILURE [22:20 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [  6.302 s]
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [  1.206 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  3.801 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Mesos ................................ SKIPPED
[INFO] Spark Project Hive Thrift Server ................... SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SKIPPED
[INFO] Spark Kinesis Integration .......................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] Spark Avro ......................................... SKIPPED
[INFO] Spark Project Kinesis Assembly ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  24:15 min
[INFO] Finished at: 2019-04-19T04:07:41-07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:2.0.0:test (test) on project spark-core_2.12: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-core_2.12
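[Editor's note: a minimal sketch of the resume step Maven suggests above, assuming the same test goal and the Hadoop 2.7 profile this Jenkins job appears to use; the exact goals and profiles are not printed in this log and are assumptions, and <goals> must be replaced with whatever was originally invoked.]
    # hypothetical resume command; goals and profiles are assumed, not taken from this log
    mvn -Phadoop-2.7 -Pyarn test -rf :spark-core_2.12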
+ retcode2=1
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ echo 'Testing Spark with Maven failed'
Testing Spark with Maven failed
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE