Console Output (Failed)

Skipping 1,251 KB of log output...
- fold
- fold with op modifying first arg
- aggregate
- treeAggregate
- treeAggregate with ops modifying first args
- treeReduce
- basic caching
- caching with failures
- empty RDD
- repartitioned RDDs
- repartitioned RDDs perform load balancing
- coalesced RDDs
- coalesced RDDs with locality
- coalesced RDDs with partial locality
- coalesced RDDs with locality, large scale (10K partitions)
- coalesced RDDs with partial locality, large scale (10K partitions)
- coalesced RDDs with locality, fail first pass
- zipped RDDs
- partition pruning
- collect large number of empty partitions
- take
- top with predefined ordering
- top with custom ordering
- takeOrdered with predefined ordering
- takeOrdered with limit 0
- takeOrdered with custom ordering
- isEmpty
- sample preserves partitioner
- takeSample
- takeSample from an empty rdd
- randomSplit
- runJob on an invalid partition
- sort an empty RDD
- sortByKey
- sortByKey ascending parameter
- sortByKey with explicit ordering
- repartitionAndSortWithinPartitions
- cartesian on empty RDD
- cartesian on non-empty RDDs
- intersection
- intersection strips duplicates in an input
- zipWithIndex
- zipWithIndex with a single partition
- zipWithIndex chained with other RDDs (SPARK-4433)
- zipWithUniqueId
- retag with implicit ClassTag
- parent method
- getNarrowAncestors
- getNarrowAncestors with multiple parents
- getNarrowAncestors with cycles
- task serialization exception should not hang scheduler
- RDD.partitions() fails fast when partitions indicies are incorrect (SPARK-13021)
- nested RDDs are not supported (SPARK-5063)
- actions cannot be performed inside of transformations (SPARK-5063)
- custom RDD coalescer
- SPARK-18406: race between end-of-task and completion iterator read lock release
- SPARK-23496: order of input partitions can result in severe skew in coalesce
- cannot run actions after SparkContext has been stopped (SPARK-5063)
- cannot call methods on a stopped SparkContext (SPARK-5063)
ExecutorSuite:
- SPARK-15963: Catch `TaskKilledException` correctly in Executor.TaskRunner
- SPARK-19276: Handle FetchFailedExceptions that are hidden by user exceptions
- Executor's worker threads should be UninterruptibleThread
- SPARK-19276: OOMs correctly handled with a FetchFailure
- SPARK-23816: interrupts are not masked by a FetchFailure
- Gracefully handle error in task deserialization
- Heartbeat should drop zero accumulator updates
- Heartbeat should not drop zero accumulator updates when the conf is disabled
SerDeUtilSuite:
- Converting an empty pair RDD to python does not throw an exception (SPARK-5441)
- Converting an empty python RDD to pair RDD does not throw an exception (SPARK-5441)
UtilsSuite:
- truncatedString
- timeConversion
- Test byteString conversion
- bytesToString
- copyStream
- memoryStringToMb
- splitCommandString
- string formatting of time durations
- reading offset bytes of a file
- reading offset bytes of a file (compressed)
- reading offset bytes across multiple files
- reading offset bytes across multiple files (compressed)
- deserialize long value
- writeByteBuffer should not change ByteBuffer position
- get iterator size
- getIteratorZipWithIndex
- doesDirectoryContainFilesNewerThan
- resolveURI
- resolveURIs with multiple paths
- nonLocalPaths
- isBindCollision
- log4j log level change
- deleteRecursively
- loading properties from file
- timeIt with prepare
- fetch hcfs dir
- shutdown hook manager
- isInDirectory
- circular buffer: if nothing was written to the buffer, display nothing
- circular buffer: if the buffer isn't full, print only the contents written
- circular buffer: data written == size of the buffer
- circular buffer: multiple overflow
- nanSafeCompareDoubles
- nanSafeCompareFloats
- isDynamicAllocationEnabled
- getDynamicAllocationInitialExecutors
- Set Spark CallerContext
- encodeFileNameToURIRawPath
- decodeFileNameInURI
- Kill process
- chi square test of randomizeInPlace
- redact sensitive information
- tryWithSafeFinally
- tryWithSafeFinallyAndFailureCallbacks
- load extensions
- check Kubernetes master URL
- Safe getSimpleName
- stringHalfWidth
- trimExceptCRLF standalone
PagedDataSourceSuite:
- basic
SortingSuite:
- sortByKey
- large array
- large array with one split
- large array with many partitions
- sort descending
- sort descending with one split
- sort descending with many partitions
- more partitions than elements
- empty RDD
- partition balancing
- partition balancing for descending sort
- get a range of elements in a sorted RDD that is on one partition
- get a range of elements over multiple partitions in a descendingly sorted RDD
- get a range of elements in an array not partitioned by a range partitioner
- get a range of elements over multiple partitions but not taking up full partitions
RpcAddressSuite:
- hostPort
- fromSparkURL
- fromSparkURL: a typo url
- fromSparkURL: invalid scheme
- toSparkURL
JavaSerializerSuite:
- JavaSerializer instances are serializable
- Deserialize object containing a primitive Class as attribute
LocalDirsSuite:
- Utils.getLocalDir() returns a valid directory, even if some local dirs are missing
- SPARK_LOCAL_DIRS override also affects driver
- Utils.getLocalDir() throws an exception if any temporary directory cannot be retrieved
TaskContextSuite:
- provide metrics sources
- calls TaskCompletionListener after failure
- calls TaskFailureListeners after failure
- all TaskCompletionListeners should be called even if some fail
- all TaskFailureListeners should be called even if some fail
- TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014)
- TaskContext.stageAttemptNumber getter
- accumulators are updated on exception failures
- failed tasks collect only accumulators whose values count during failures
- only updated internal accumulators will be sent back to driver
- localProperties are propagated to executors correctly
- immediately call a completion listener if the context is completed
- immediately call a failure listener if the context has failed
- TaskCompletionListenerException.getMessage should include previousError
- all TaskCompletionListeners should be called even if some fail or a task
HistoryServerSuite:
- application list json
- completed app list json
- running app list json
- minDate app list json
- maxDate app list json
- maxDate2 app list json
- minEndDate app list json
- maxEndDate app list json
- minEndDate and maxEndDate app list json
- minDate and maxEndDate app list json
- limit app list json
- one app json
- one app multi-attempt json
- job list json
- job list from multi-attempt app json(1)
- job list from multi-attempt app json(2)
- one job json
- succeeded job list json
- succeeded&failed job list json
- executor list json
- executor list with executor metrics json
- stage list json
- complete stage list json
- failed stage list json
- one stage json
- one stage attempt json
- stage task summary w shuffle write
- stage task summary w shuffle read
- stage task summary w/ custom quantiles
- stage task list
- stage task list w/ offset & length
- stage task list w/ sortBy
- stage task list w/ sortBy short names: -runtime
- stage task list w/ sortBy short names: runtime
- stage list with accumulable json
- stage with accumulable json
- stage task list from multi-attempt app json(1)
- stage task list from multi-attempt app json(2)
- blacklisting for stage
- blacklisting node for stage
- rdd list storage json
- executor node blacklisting
- executor node blacklisting unblacklisting
- executor memory usage
- app environment
- download all logs for app with multiple attempts
- download one log for app with multiple attempts
- response codes on bad paths
- automatically retrieve uiRoot from request through Knox
- static relative links are prefixed with uiRoot (spark.ui.proxyBase)
- /version api endpoint
- ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase)
- security manager starts with spark.authenticate set
- incomplete apps get refreshed
- ui and api authorization checks
NextIteratorSuite:
- one iteration
- two iterations
- empty iteration
- close is called once for empty iterations
- close is called once for non-empty iterations
ParallelCollectionSplitSuite:
- one element per slice
- one slice
- equal slices
- non-equal slices
- splitting exclusive range
- splitting inclusive range
- empty data
- zero slices
- negative number of slices
- exclusive ranges sliced into ranges
- inclusive ranges sliced into ranges
- identical slice sizes between Range and NumericRange
- identical slice sizes between List and NumericRange
- large ranges don't overflow
- random array tests
- random exclusive range tests
- random inclusive range tests
- exclusive ranges of longs
- inclusive ranges of longs
- exclusive ranges of doubles
- inclusive ranges of doubles
- inclusive ranges with Int.MaxValue and Int.MinValue
- empty ranges with Int.MaxValue and Int.MinValue
UISeleniumSuite:
- effects of unpersist() / persist() should be reflected
- failed stages should not appear to be active
- spark.ui.killEnabled should properly control kill button display
- jobs page should not display job group name unless some job was submitted in a job group
- job progress bars should handle stage / task failures
- job details page should display useful information for stages that haven't started
- job progress bars / cells reflect skipped stages / tasks
- stages that aren't run appear as 'skipped stages' after a job finishes
- jobs with stages that are skipped should show correct link descriptions on all jobs page
- attaching and detaching a new tab
- kill stage POST/GET response is correct
- kill job POST/GET response is correct
- stage & job retention
- live UI json application list
- job stages should have expected dotfile under DAG visualization
- stages page should show skipped stages
HadoopDelegationTokenManagerSuite:
- Correctly load default credential providers
- disable hive credential provider
- using deprecated configurations
- verify no credentials are obtained
- obtain tokens For HiveMetastore
- Obtain tokens For HBase
- SPARK-23209: obtain tokens when Hive classes are not available
RandomBlockReplicationPolicyBehavior:
- block replication - random block replication policy
ExecutorRunnerTest:
- command includes appId
EventLoggingListenerSuite:
- Verify log file exist
- Basic event logging
- Basic event logging with compression
- End-to-end event logging
- End-to-end event logging with compression
- Event logging with password redaction
- Log overwriting
- Event log name
- Executor metrics update
DriverRunnerTest:
- Process succeeds instantly
- Process failing several times and then succeeding
- Process doesn't restart if not supervised
- Process doesn't restart if killed
- Reset of backoff counter
- Kill process finalized with state KILLED
- Finalized with state FINISHED
- Finalized with state FAILED
- Handle exception starting process
PrefixComparatorsSuite:
- String prefix comparator
- Binary prefix comparator
- double prefix comparator handles NaNs properly
- double prefix comparator handles negative NaNs properly
- double prefix comparator handles other special values properly
NettyBlockTransferSecuritySuite:
- security default off
- security on same password
- security on mismatch password
- security mismatch auth off on server
- security mismatch auth off on client
- security with aes encryption
CommandUtilsSuite:
- set libraryPath correctly
- auth secret shouldn't appear in java opts
PairRDDFunctionsSuite:
- aggregateByKey
- groupByKey
- groupByKey with duplicates
- groupByKey with negative key hash codes
- groupByKey with many output partitions
- sampleByKey
- sampleByKeyExact
- reduceByKey
- reduceByKey with collectAsMap
- reduceByKey with many output partitions
- reduceByKey with partitioner
- countApproxDistinctByKey
- join
- join all-to-all
- leftOuterJoin
- cogroup with empty RDD
- cogroup with groupByed RDD having 0 partitions
- cogroup between multiple RDD with an order of magnitude difference in number of partitions
- cogroup between multiple RDD with number of partitions similar in order of magnitude
- cogroup between multiple RDD when defaultParallelism is set without proper partitioner
- cogroup between multiple RDD when defaultParallelism is set with proper partitioner
- cogroup between multiple RDD when defaultParallelism is set; with huge number of partitions in upstream RDDs
- rightOuterJoin
- fullOuterJoin
- join with no matches
- join with many output partitions
- groupWith
- groupWith3
- groupWith4
- zero-partition RDD
- keys and values
- default partitioner uses partition size
- default partitioner uses largest partitioner
- subtract
- subtract with narrow dependency
- subtractByKey
- subtractByKey with narrow dependency
- foldByKey
- foldByKey with mutable result type
- saveNewAPIHadoopFile should call setConf if format is configurable
- The JobId on the driver and executors should be the same during the commit
- saveAsHadoopFile should respect configured output committers
- failure callbacks should be called before calling writer.close() in saveNewAPIHadoopFile
- failure callbacks should be called before calling writer.close() in saveAsHadoopFile
- saveAsNewAPIHadoopDataset should support invalid output paths when there are no files to be committed to an absolute output location
- saveAsHadoopDataset should respect empty output directory when there are no files to be committed to an absolute output location
- lookup
- lookup with partitioner
- lookup with bad partitioner
RBackendSuite:
- close() clears jvmObjectTracker
PrimitiveVectorSuite:
- primitive value
- non-primitive value
- ideal growth
- ideal size
- resizing
MetricsConfigSuite:
- MetricsConfig with default properties
- MetricsConfig with properties set from a file
- MetricsConfig with properties set from a Spark configuration
- MetricsConfig with properties set from a file and a Spark configuration
- MetricsConfig with subProperties
PartiallySerializedBlockSuite:
- valuesIterator() and finishWritingToStream() cannot be called after discard() is called
- discard() can be called more than once
- cannot call valuesIterator() more than once
- cannot call finishWritingToStream() more than once
- cannot call finishWritingToStream() after valuesIterator()
- cannot call valuesIterator() after finishWritingToStream()
- buffers are deallocated in a TaskCompletionListener
- basic numbers with discard() and numBuffered = 50
- basic numbers with finishWritingToStream() and numBuffered = 50
- basic numbers with valuesIterator() and numBuffered = 50
- basic numbers with discard() and numBuffered = 0
- basic numbers with finishWritingToStream() and numBuffered = 0
- basic numbers with valuesIterator() and numBuffered = 0
- basic numbers with discard() and numBuffered = 1000
- basic numbers with finishWritingToStream() and numBuffered = 1000
- basic numbers with valuesIterator() and numBuffered = 1000
- case classes with discard() and numBuffered = 50
- case classes with finishWritingToStream() and numBuffered = 50
- case classes with valuesIterator() and numBuffered = 50
- case classes with discard() and numBuffered = 0
- case classes with finishWritingToStream() and numBuffered = 0
- case classes with valuesIterator() and numBuffered = 0
- case classes with discard() and numBuffered = 1000
- case classes with finishWritingToStream() and numBuffered = 1000
- case classes with valuesIterator() and numBuffered = 1000
- empty iterator with discard() and numBuffered = 0
- empty iterator with finishWritingToStream() and numBuffered = 0
- empty iterator with valuesIterator() and numBuffered = 0
SparkContextSchedulerCreationSuite:
- bad-master
- local
- local-*
- local-n
- local-*-n-failures
- local-n-failures
- bad-local-n
- bad-local-n-failures
- local-default-parallelism
- local-cluster
SerializationDebuggerSuite:
- primitives, strings, and nulls
- primitive arrays
- non-primitive arrays
- serializable object
- nested arrays
- nested objects
- cycles (should not loop forever)
- root object not serializable
- array containing not serializable element
- object containing not serializable field
- externalizable class writing out not serializable object
- externalizable class writing out serializable objects
- object containing writeReplace() which returns not serializable object
- object containing writeReplace() which returns serializable object
- no infinite loop with writeReplace() which returns class of its own type
- object containing writeObject() and not serializable field
- object containing writeObject() and serializable field
- object of serializable subclass with more fields than superclass (SPARK-7180)
- crazy nested objects
- improveException
- improveException with error in debugger
NettyRpcHandlerSuite:
- receive
- connectionTerminated
SamplingUtilsSuite:
- reservoirSampleAndCount
- SPARK-18678 reservoirSampleAndCount with tiny input
- computeFraction
TimeStampedHashMapSuite:
- HashMap - basic test
- TimeStampedHashMap - basic test
- TimeStampedHashMap - threading safety test
- TimeStampedHashMap - clearing by timestamp
RandomSamplerSuite:
- utilities
- sanity check medianKSD against references
- bernoulli sampling
- bernoulli sampling without iterator
- bernoulli sampling with gap sampling optimization
- bernoulli sampling (without iterator) with gap sampling optimization
- bernoulli boundary cases
- bernoulli (without iterator) boundary cases
- bernoulli data types
- bernoulli clone
- bernoulli set seed
- replacement sampling
- replacement sampling without iterator
- replacement sampling with gap sampling
- replacement sampling (without iterator) with gap sampling
- replacement boundary cases
- replacement (without) boundary cases
- replacement data types
- replacement clone
- replacement set seed
- bernoulli partitioning sampling
- bernoulli partitioning sampling without iterator
- bernoulli partitioning boundary cases
- bernoulli partitioning (without iterator) boundary cases
- bernoulli partitioning data
- bernoulli partitioning clone
ChunkedByteBufferOutputStreamSuite:
- empty output
- write a single byte
- write a single near boundary
- write a single at boundary
- single chunk output
- single chunk output at boundary size
- multiple chunk output
- multiple chunk output at boundary size
SparkSubmitUtilsSuite:
- incorrect maven coordinate throws error
- create repo resolvers
- create additional resolvers
:: loading settings :: url = jar:file:/home/jenkins/.m2/repository/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
- add dependencies works correctly
- excludes works correctly
- ivy path works correctly
- search for artifact at local repositories
- dependency not found throws RuntimeException
- neglects Spark and Spark's dependencies
- exclude dependencies end to end
:: loading settings :: file = /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/core/target/tmp/ivy-5722ff72-cf6c-4940-bed3-54b0c24f7436/ivysettings.xml
- load ivy settings file
- SPARK-10878: test resolution files cleaned after resolving artifact
ImplicitOrderingSuite:
- basic inference of Orderings
TaskMetricsSuite:
- mutating values
- mutating shuffle read metrics values
- mutating shuffle write metrics values
- mutating input metrics values
- mutating output metrics values
- merging multiple shuffle read metrics
- additional accumulables
ExternalShuffleServiceSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- using external shuffle service
ClosureCleanerSuite:
- closures inside an object
- closures inside a class
- closures inside a class with no default constructor
- closures that don't use fields of the outer class
- nested closures inside an object
- nested closures inside a class
- toplevel return statements in closures are identified at cleaning time
- return statements from named functions nested in closures don't raise exceptions
- user provided closures are actually cleaned
- createNullValue
- SPARK-22328: ClosureCleaner misses referenced superclass fields: case 1
- SPARK-22328: ClosureCleaner misses referenced superclass fields: case 2
- SPARK-22328: multiple outer classes have the same parent class
UnpersistSuite:
- unpersist RDD
TaskSetManagerSuite:
- TaskSet with no preferences
- multiple offers with no preferences
- skip unsatisfiable locality levels
- basic delay scheduling
- we do not need to delay scheduling when we only have noPref tasks in the queue
- delay scheduling with fallback
- delay scheduling with failed hosts
- task result lost
- repeated failures lead to task set abortion
- executors should be blacklisted after task failure, in spite of locality preferences
- new executors get added and lost
- Executors exit for reason unrelated to currently running tasks
- test RACK_LOCAL tasks
- do not emit warning when serialized task is small
- emit warning when serialized task is large
- Not serializable exception thrown if the task cannot be serialized
- abort the job if total size of results is too large
Exception in thread "task-result-getter-3" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
	at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:115)
	at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:759)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	... 2 more
- [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie
- [SPARK-22074] Task killed by other attempt task should not be resubmitted
- speculative and noPref task should be scheduled after node-local
- node-local tasks should be scheduled right away when there are only node-local and no-preference tasks
- SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished
- SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished
- Ensure TaskSetManager is usable after addition of levels
- Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL.
- Test TaskLocation for different host type.
- Kill other task attempts when one attempt belonging to the same task succeeds
- Killing speculative tasks does not count towards aborting the taskset
- SPARK-19868: DagScheduler only notified of taskEnd when state is ready
- SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names
- don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks
- update application blacklist for shuffle-fetch
- update blacklist before adding pending task to avoid race condition
- SPARK-21563 context's added jars shouldn't change mid-TaskSet
- [SPARK-24677] Avoid NoSuchElementException from MedianHeap
- SPARK-24755 Executor loss can cause task to not be resubmitted
- SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success
BlockManagerBasicStrategyReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
RDDOperationGraphSuite:
- Test simple cluster equals
ShuffleExternalSorterSuite:
- nested spill should be no-op
ChunkedByteBufferSuite:
- no chunks
- getChunks() duplicates chunks
- copy() does not affect original buffer's position
- writeFully() does not affect original buffer's position
- SPARK-24107: writeFully() write buffer which is larger than bufferWriteChunkSize
- toArray()
- toArray() throws UnsupportedOperationException if size exceeds 2GB
- toInputStream()
HistoryServerDiskManagerSuite:
- leasing space
- tracking active stores
- approximate size heuristic
PythonBroadcastSuite:
- PythonBroadcast can be serialized with Kryo (SPARK-4882)
NettyBlockTransferServiceSuite:
- can bind to a random port
- can bind to two random ports
- can bind to a specific port
- can bind to a specific port twice and the second increments
BasicSchedulerIntegrationSuite:
- super simple job
- multi-stage job
- job with fetch failure
- job failure after 4 attempts
JobWaiterSuite:
- call jobFailed multiple times
RDDBarrierSuite:
- create an RDDBarrier
- create an RDDBarrier in the middle of a chain of RDDs
- RDDBarrier with shuffle
UninterruptibleThreadSuite:
- interrupt when runUninterruptibly is running
- interrupt before runUninterruptibly runs
- nested runUninterruptibly
- stress test
DriverSuite:
- driver should exit after finishing without cleanup (SPARK-530) !!! IGNORED !!!
CompactBufferSuite:
- empty buffer
- basic inserts
- adding sequences
- adding the same buffer to itself
MapStatusSuite:
- compressSize
- decompressSize
- MapStatus should never report non-empty blocks' sizes as 0
- large tasks should use org.apache.spark.scheduler.HighlyCompressedMapStatus
- HighlyCompressedMapStatus: estimated size should be the average non-empty block size
- SPARK-22540: ensure HighlyCompressedMapStatus calculates correct avgSize
- RoaringBitmap: runOptimize succeeded
- RoaringBitmap: runOptimize failed
- Blocks which are bigger than SHUFFLE_ACCURATE_BLOCK_THRESHOLD should not be underestimated.
- SPARK-21133 HighlyCompressedMapStatus#writeExternal throws NPE
BlockInfoManagerSuite:
- initial memory usage
- get non-existent block
- basic lockNewBlockForWriting
- lockNewBlockForWriting blocks while write lock is held, then returns false after release
- lockNewBlockForWriting blocks while write lock is held, then returns true after removal
- read locks are reentrant
- multiple tasks can hold read locks
- single task can hold write lock
- cannot grab a writer lock while already holding a write lock
- assertBlockIsLockedForWriting throws exception if block is not locked
- downgrade lock
- write lock will block readers
- read locks will block writer
- removing a non-existent block throws IllegalArgumentException
- removing a block without holding any locks throws IllegalStateException
- removing a block while holding only a read lock throws IllegalStateException
- removing a block causes blocked callers to receive None
- releaseAllLocksForTask releases write locks
StoragePageSuite:
- rddTable
- empty rddTable
- streamBlockStorageLevelDescriptionAndSize
- receiverBlockTables
- empty receiverBlockTables
TaskSchedulerImplSuite:
- Scheduler does not always schedule tasks on the same workers
- Scheduler correctly accounts for multiple CPUs per task
- Scheduler does not crash when tasks are not serializable
- refuse to schedule concurrent attempts for the same stage (SPARK-8103)
- don't schedule more tasks after a taskset is zombie
- if a zombie attempt finishes, continue scheduling tasks for non-zombie attempts
- tasks are not re-scheduled while executor loss reason is pending
- scheduled tasks obey task and stage blacklists
- scheduled tasks obey node and executor blacklists
- abort stage when all executors are blacklisted
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 9
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 9
- abort stage if executor loss results in unschedulability from previously failed tasks
- don't abort if there is an executor available, though it hasn't had scheduled tasks yet
- SPARK-16106 locality levels updated if executor added to existing host
- scheduler checks for executors that can be expired from blacklist
- if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553)
- if a task finishes with TaskState.LOST its executor is marked as dead
- Locality should be used for bulk offers even with delay scheduling off
- With delay scheduling off, tasks can be run at any locality level immediately
- TaskScheduler should throw IllegalArgumentException when schedulingMode is not supported
- Completions in zombie tasksets update status of non-zombie taskset
- don't schedule for a barrier taskSet if available slots are less than pending tasks
- schedule tasks for a barrier taskSet if all tasks can be launched together
- cancelTasks shall kill all the running tasks and fail the stage
- killAllTaskAttempts shall kill all the running tasks and not fail the stage
- mark taskset for a barrier stage as zombie in case a task fails
SparkConfSuite:
- Test byteString conversion
- Test timeString conversion
- loading from system properties
- initializing without loading defaults
- named set methods
- basic get and set
- creating SparkContext without master and app name
- creating SparkContext without master
- creating SparkContext without app name
- creating SparkContext with both master and app name
- SparkContext property overriding
- nested property names
- Thread safeness - SPARK-5425
- register kryo classes through registerKryoClasses
- register kryo classes through registerKryoClasses and custom registrator
- register kryo classes through conf
- deprecated configs
- akka deprecated configs
- SPARK-13727
- SPARK-17240: SparkConf should be serializable (java)
- SPARK-17240: SparkConf should be serializable (kryo)
- encryption requires authentication
- spark.network.timeout should bigger than spark.executor.heartbeatInterval
- SPARK-24337: getSizeAsKb with default throws an useful error message with key name
- SPARK-24337: getTimeAsMs throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default long throws an useful error message with key name
- SPARK-24337: getSizeAsMb throws an useful error message with key name
- SPARK-24337: getSizeAsGb throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default string throws an useful error message with key name
- SPARK-24337: getDouble throws an useful error message with key name
- SPARK-24337: getTimeAsMs with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes throws an useful error message with key name
- SPARK-24337: getSizeAsGb with default throws an useful error message with key name
- SPARK-24337: getInt throws an useful error message with key name
- SPARK-24337: getSizeAsMb with default throws an useful error message with key name
- SPARK-24337: getSizeAsKb throws an useful error message with key name
- SPARK-24337: getBoolean throws an useful error message with key name
- SPARK-24337: getLong throws an useful error message with key name
ShuffleBlockFetcherIteratorSuite:
- successful 3 local reads + 2 remote reads
- release current unexhausted buffer in case the task completes early
- fail all blocks if any of the remote request fails
- retry corrupt blocks
- big blocks are not checked for corruption
- retry corrupt blocks (disabled)
- Blocks should be shuffled to disk when size of the request is above the threshold(maxReqSizeShuffleToMem).
- fail zero-size blocks
ConfigEntrySuite:
- conf entry: int
- conf entry: long
- conf entry: double
- conf entry: boolean
- conf entry: optional
- conf entry: fallback
- conf entry: time
- conf entry: bytes
- conf entry: regex
- conf entry: string seq
- conf entry: int seq
- conf entry: transformation
- conf entry: checkValue()
- conf entry: valid values check
- conf entry: conversion error
- default value handling is null-safe
- variable expansion of spark config entries
- conf entry : default function
- conf entry: alternative keys
- onCreate
WorkerSuite:
- test isUseLocalNodeSSLConfig
- test maybeUpdateSSLSettings
- test clearing of finishedExecutors (small number of executors)
- test clearing of finishedExecutors (more executors)
- test clearing of finishedDrivers (small number of drivers)
- test clearing of finishedDrivers (more drivers)
- cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=true
- don't cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=false
BlockManagerSuite:
- StorageLevel object caching
- BlockManagerId object caching
- BlockManagerId.isDriver() backwards-compatibility with legacy driver ids (SPARK-6716)
- master + 1 manager interaction
- master + 2 managers interaction
- removing block
- removing rdd
- removing broadcast
- reregistration on heart beat
- reregistration on block update
- reregistration doesn't dead lock
- correct BlockResult returned from get() calls
- optimize a location order of blocks without topology information
- optimize a location order of blocks with topology information
- SPARK-9591: getRemoteBytes from another location when Exception throw
- SPARK-14252: getOrElseUpdate should still read from remote storage
- in-memory LRU storage
- in-memory LRU storage with serialization
- in-memory LRU storage with off-heap
- in-memory LRU for partitions of same RDD
- in-memory LRU for partitions of multiple RDDs
- on-disk storage (encryption = off)
- on-disk storage (encryption = on)
- disk and memory storage (encryption = off)
- disk and memory storage (encryption = on)
- disk and memory storage with getLocalBytes (encryption = off)
- disk and memory storage with getLocalBytes (encryption = on)
- disk and memory storage with serialization (encryption = off)
- disk and memory storage with serialization (encryption = on)
- disk and memory storage with serialization and getLocalBytes (encryption = off)
- disk and memory storage with serialization and getLocalBytes (encryption = on)
- disk and off-heap memory storage (encryption = off)
- disk and off-heap memory storage (encryption = on)
- disk and off-heap memory storage with getLocalBytes (encryption = off)
- disk and off-heap memory storage with getLocalBytes (encryption = on)
- LRU with mixed storage levels (encryption = off)
- LRU with mixed storage levels (encryption = on)
- in-memory LRU with streams (encryption = off)
- in-memory LRU with streams (encryption = on)
- LRU with mixed storage levels and streams (encryption = off)
- LRU with mixed storage levels and streams (encryption = on)
- negative byte values in ByteBufferInputStream
- overly large block
- block compression
- block store put failure
- turn off updated block statuses
- updated block statuses
- query block statuses
- get matching blocks
- SPARK-1194 regression: fix the same-RDD rule for cache replacement
- safely unroll blocks through putIterator (disk)
- read-locked blocks cannot be evicted from memory
- remove block if a read fails due to missing DiskStore files (SPARK-15736)
- SPARK-13328: refresh block locations (fetch should fail after hitting a threshold)
- SPARK-13328: refresh block locations (fetch should succeed after location refresh)
- SPARK-17484: block status is properly updated following an exception in put()
- SPARK-17484: master block locations are updated following an invalid remote block fetch
- SPARK-20640: Shuffle registration timeout and maxAttempts conf are working
- fetch remote block to local disk if block size is larger than threshold
- query locations of blockIds
PythonRunnerSuite:
- format path
- format paths
CryptoStreamUtilsSuite:
- crypto configuration conversion
- shuffle encryption key length should be 128 by default
- create 256-bit key
- create key with invalid length
- serializer manager integration
- encryption key propagation to executors
- crypto stream wrappers
- error handling wrapper
StatsdSinkSuite:
- metrics StatsD sink with Counter
- metrics StatsD sink with Gauge
- metrics StatsD sink with Histogram
- metrics StatsD sink with Timer
FileCommitProtocolInstantiationSuite:
- Dynamic partitions require appropriate constructor
- Standard partitions work with classic constructor
- Three arg constructors have priority
- Three arg constructors have priority when dynamic
- The protocol must be of the correct class
- If there is no matching constructor, class hierarchy is irrelevant
CompletionIteratorSuite:
- basic test
LauncherBackendSuite:
- local: launcher handle
- standalone/client: launcher handle
LogPageSuite:
- get logs simple
UnifiedMemoryManagerSuite:
- single task requesting on-heap execution memory
- two tasks requesting full on-heap execution memory
- two tasks cannot grow past 1 / N of on-heap execution memory
- tasks can block to get at least 1 / 2N of on-heap execution memory
- TaskMemoryManager.cleanUpAllAllocatedMemory
- tasks should not be granted a negative amount of execution memory
- off-heap execution allocations cannot exceed limit
- basic execution memory
- basic storage memory
- execution evicts storage
- execution memory requests smaller than free memory should evict storage (SPARK-12165)
- storage does not evict execution
- small heap
- insufficient executor memory
- execution can evict cached blocks when there are multiple active tasks (SPARK-12155)
- SPARK-15260: atomically resize memory pools
- not enough free memory in the storage pool --OFF_HEAP
UnsafeKryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- serialization buffer overflow reporting
- SPARK-12222: deserialize RoaringBitmap throw Buffer underflow exception
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true
- instance reuse with autoReset = false, referenceTracking = true
- instance reuse with autoReset = true, referenceTracking = false
- instance reuse with autoReset = false, referenceTracking = false
NettyRpcAddressSuite:
- toString
- toString for client mode
BitSetSuite:
- basic set and get
- 100% full bit set
- nextSetBit
- xor len(bitsetX) < len(bitsetY)
- xor len(bitsetX) > len(bitsetY)
- andNot len(bitsetX) < len(bitsetY)
- andNot len(bitsetX) > len(bitsetY)
- [gs]etUntil
AsyncRDDActionsSuite:
- countAsync
- collectAsync
- foreachAsync
- foreachPartitionAsync
- takeAsync
- async success handling
- async failure handling
- FutureAction result, infinite wait
- FutureAction result, finite wait
- FutureAction result, timeout
- SimpleFutureAction callback must not consume a thread while waiting
- ComplexFutureAction callback must not consume a thread while waiting
StagePageSuite:
- ApiHelper.COLUMN_TO_INDEX should match headers of the task table
- peak execution memory should displayed
- SPARK-10543: peak execution memory should be per-task rather than cumulative
BarrierStageOnSubmittedSuite:
- submit a barrier ResultStage that contains PartitionPruningRDD
- submit a barrier ShuffleMapStage that contains PartitionPruningRDD
- submit a barrier stage that doesn't contain PartitionPruningRDD
- submit a barrier stage with partial partitions
- submit a barrier stage with union()
- submit a barrier stage with coalesce()
- submit a barrier stage that contains an RDD that depends on multiple barrier RDDs
- submit a barrier stage with zip()
- submit a barrier ResultStage with dynamic resource allocation enabled
- submit a barrier ShuffleMapStage with dynamic resource allocation enabled
- submit a barrier ResultStage that requires more slots than current total under local mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local mode
- submit a barrier ResultStage that requires more slots than current total under local-cluster mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local-cluster mode
HistoryServerArgumentsSuite:
- No Arguments Parsing
- Properties File Arguments Parsing --properties-file
MetricsSystemSuite:
- MetricsSystem with default config
- MetricsSystem with sources add
- MetricsSystem with Driver instance
- MetricsSystem with Driver instance and spark.app.id is not set
- MetricsSystem with Driver instance and spark.executor.id is not set
- MetricsSystem with Executor instance
- MetricsSystem with Executor instance and spark.app.id is not set
- MetricsSystem with Executor instance and spark.executor.id is not set
- MetricsSystem with instance which is neither Driver nor Executor
- MetricsSystem with Executor instance, with custom namespace
- MetricsSystem with Executor instance, custom namespace which is not set
- MetricsSystem with Executor instance, custom namespace, spark.executor.id not set
- MetricsSystem with non-driver, non-executor instance with custom namespace
JobCancellationSuite:
- local mode, FIFO scheduler
- local mode, fair scheduler
- cluster mode, FIFO scheduler
- cluster mode, fair scheduler
- do not put partially executed partitions into cache
- job group
- inherited job group (SPARK-6629)
- job group with interruption
- task reaper kills JVM if killed tasks keep running for too long
- task reaper will not kill JVM if spark.task.killTimeout == -1
- two jobs sharing the same stage
- interruptible iterator of shuffle reader
PartitioningSuite:
- HashPartitioner equality
- RangePartitioner equality
- RangePartitioner getPartition
- RangePartitioner for keys that are not Comparable (but with Ordering)
- RangPartitioner.sketch
- RangePartitioner.determineBounds
- RangePartitioner should run only one job if data is roughly balanced
- RangePartitioner should work well on unbalanced data
- RangePartitioner should return a single partition for empty RDDs
- HashPartitioner not equal to RangePartitioner
- partitioner preservation
- partitioning Java arrays should fail
- zero-length partitions should be correctly handled
- Number of elements in RDD is less than number of partitions
- defaultPartitioner
- defaultPartitioner when defaultParallelism is set
SecurityManagerSuite:
- set security with conf
- set security with conf for groups
- set security with api
- set security with api for groups
- set security modify acls
- set security modify acls for groups
- set security admin acls
- set security admin acls for groups
- set security with * in acls
- set security with * in acls for groups
- security for groups default behavior
- missing secret authentication key
- secret authentication key
- secret key generation
UISuite:
- basic ui visibility !!! IGNORED !!!
- visibility at localhost:4040 !!! IGNORED !!!
- jetty selects different port under contention
- jetty with https selects different port under contention
- jetty binds to port 0 correctly
- jetty with https binds to port 0 correctly
- verify webUrl contains the scheme
- verify webUrl contains the port
- verify proxy rewrittenURI
- verify rewriting location header for reverse proxy
- http -> https redirect applies to all URIs
- specify both http and https ports separately
SSLOptionsSuite:
- test resolving property file as spark conf 
- test resolving property with defaults specified 
- test whether defaults can be overridden 
- variable substitution
- get password from Hadoop credential provider
SparkListenerWithClusterSuite:
- SparkListener sends executor added message
InputOutputMetricsSuite:
- input metrics for old hadoop with coalesce
- input metrics with cache and coalesce
- input metrics for new Hadoop API with coalesce
- input metrics when reading text file
- input metrics on records read - simple
- input metrics on records read - more stages
- input metrics on records - New Hadoop API
- input metrics on records read with cache
- input read/write and shuffle read/write metrics all line up
- input metrics with interleaved reads
- output metrics on records written
- output metrics on records written - new Hadoop API
- output metrics when writing text file
- input metrics with old CombineFileInputFormat
- input metrics with new CombineFileInputFormat
- input metrics with old Hadoop API in different thread
- input metrics with new Hadoop API in different thread
OutputCommitCoordinatorIntegrationSuite:
- exception thrown in OutputCommitter.commitTask()
StandaloneRestSubmitSuite:
- construct submit request
- create submission
- create submission from main method
- kill submission
- request submission status
- create then kill
- create then request status
- create then kill then request status
- kill or request status before create
- good request paths
- good request paths, bad requests
- bad request paths
- server returns unknown fields
- client handles faulty server
- client does not send 'SPARK_ENV_LOADED' env var by default
- client includes mesos env vars
OutputCommitCoordinatorSuite:
- Only one of two duplicate commit tasks should commit
- If commit fails, if task is retried it should not be locked, and will succeed.
- Job should not complete if all commits are denied
- Only authorized committer failures can clear the authorized committer lock (SPARK-6614)
- SPARK-19631: Do not allow failed attempts to be authorized for committing
- SPARK-24589: Differentiate tasks from different stage attempts
- SPARK-24589: Make sure stage state is cleaned up
SortShuffleSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- SortShuffleManager properly cleans up files for shuffles that use the serialized path
- SortShuffleManager properly cleans up files for shuffles that use the deserialized path
SumEvaluatorSuite:
- correct handling of count 1
- correct handling of count 0
- correct handling of NaN
- correct handling of > 1 values
- test count > 1
MapOutputTrackerSuite:
- master start and stop
- master register shuffle and fetch
- master register and unregister shuffle
- master register shuffle and unregister map output and fetch
- remote fetch
- remote fetch below max RPC message size
- min broadcast size exceeds max RPC message size
- getLocationsWithLargestOutputs with multiple outputs in same machine
- remote fetch using broadcast
- equally divide map statistics tasks
- zero-sized blocks should be excluded when getMapSizesByExecutorId
WholeTextFileInputFormatSuite:
- for small files minimum split size per node and per rack should be less than or equal to maximum split size.
BlockManagerProactiveReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@24eddb0 rejected from java.util.concurrent.ThreadPoolExecutor@6714921[Shutting down, pool size = 6, active threads = 1, queued tasks = 0, completed tasks = 7]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
- proactive block replication - 2 replicas - 1 block manager deletions
- proactive block replication - 3 replicas - 2 block manager deletions
- proactive block replication - 4 replicas - 3 block manager deletions
- proactive block replication - 5 replicas - 4 block manager deletions
SparkListenerSuite:
- don't call sc.stop in listener
- basic creation and shutdown of LiveListenerBus
- bus.stop() waits for the event queue to completely drain
- metrics for dropped listener events
- basic creation of StageInfo
- basic creation of StageInfo with shuffle
- StageInfo with fewer tasks than partitions
- local metrics
- onTaskGettingResult() called when result fetched remotely
- onTaskGettingResult() not called when result sent directly
- onTaskEnd() should be called for all started tasks, even after job has been killed
- SparkListener moves on if a listener throws an exception
- registering listeners via spark.extraListeners
- add and remove listeners to/from LiveListenerBus queues
- interrupt within listener is handled correctly: throw interrupt
- interrupt within listener is handled correctly: set Thread interrupted
VersionUtilsSuite:
- Parse Spark major version
- Parse Spark minor version
- Parse Spark major and minor versions
SizeTrackerSuite:
- vector fixed size insertions
- vector variable size insertions
- map fixed size insertions
- map variable size insertions
- map updates
SortShuffleManagerSuite:
- supported shuffle dependencies for serialized shuffle
- unsupported shuffle dependencies for serialized shuffle
KryoSerializerAutoResetDisabledSuite:
- sort-shuffle with bypassMergeSort (SPARK-7873)
- calling deserialize() after deserializeStream()
CompressionCodecSuite:
- default compression codec
- lz4 compression codec
- lz4 compression codec short form
- lz4 supports concatenation of serialized streams
- lzf compression codec
- lzf compression codec short form
- lzf supports concatenation of serialized streams
- snappy compression codec
- snappy compression codec short form
- snappy supports concatenation of serialized streams
- zstd compression codec
- zstd compression codec short form
- zstd supports concatenation of serialized zstd
- bad compression codec
ChunkedByteBufferFileRegionSuite:
- transferTo can stop and resume correctly
- transfer to with random limits
XORShiftRandomSuite:
- XORShift generates valid random numbers
- XORShift with zero seed
- hashSeed has random bits throughout
CoarseGrainedSchedulerBackendSuite:
- serialized task larger than max RPC message size
- compute max number of concurrent tasks can be launched
- compute max number of concurrent tasks can be launched when spark.task.cpus > 1 *** FAILED ***
  The code passed to eventually never returned normally. Attempted 639 times over 10.001449698999998 seconds. Last failure message: ArrayBuffer() had length 0 instead of expected length 4. (CoarseGrainedSchedulerBackendSuite.scala:67)
- compute max number of concurrent tasks can be launched when some executors are busy *** FAILED ***
  The code passed to eventually never returned normally. Attempted 644 times over 10.013590047 seconds. Last failure message: ArrayBuffer() had length 0 instead of expected length 4. (CoarseGrainedSchedulerBackendSuite.scala:99)
AppendOnlyMapSuite:
- initialization
- object keys and values
- primitive keys and values
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- destructive sort
ConfigReaderSuite:
- variable expansion
- circular references
- spark conf provider filters config keys
ThreadUtilsSuite:
- newDaemonSingleThreadExecutor
- newDaemonSingleThreadScheduledExecutor
- newDaemonCachedThreadPool
- sameThread
- runInNewThread
Exception in thread "test-ForkJoinPool-3-worker-3" Exception in thread "test-ForkJoinPool-3-worker-1" java.lang.InterruptedException: sleep interrupted
- parmap should be interruptible
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.util.ThreadUtilsSuite$$anonfun$11$$anon$1$$anonfun$run$1.apply(ThreadUtilsSuite.scala:152)
	at org.apache.spark.util.ThreadUtilsSuite$$anonfun$11$$anon$1$$anonfun$run$1.apply(ThreadUtilsSuite.scala:151)
	at org.apache.spark.util.ThreadUtils$$anonfun$3$$anonfun$apply$1.apply(ThreadUtils.scala:287)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Exception in thread "test-ForkJoinPool-3-worker-1" java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.util.ThreadUtilsSuite$$anonfun$11$$anon$1$$anonfun$run$1.apply(ThreadUtilsSuite.scala:152)
	at org.apache.spark.util.ThreadUtilsSuite$$anonfun$11$$anon$1$$anonfun$run$1.apply(ThreadUtilsSuite.scala:151)
	at org.apache.spark.util.ThreadUtils$$anonfun$3$$anonfun$apply$1.apply(ThreadUtils.scala:287)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
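The "sleep interrupted" traces above are expected noise from a passing test: parmap's worker threads are blocked in Thread.sleep when they are interrupted, and the resulting InterruptedException is printed to stderr. A minimal sketch of that mechanism (illustrative code, not the suite's):

object InterruptedSleepSketch {
  def main(args: Array[String]): Unit = {
    val worker = new Thread(new Runnable {
      override def run(): Unit = {
        try {
          Thread.sleep(10000)   // blocks until interrupted
        } catch {
          case _: InterruptedException =>
            // interrupt() wakes the sleeping thread by throwing here
            println("sleep interrupted, worker exits")
        }
      }
    })
    worker.start()
    worker.interrupt()
    worker.join()
  }
}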
SocketAuthHelperSuite:
- successful auth
- failed auth
RDDOperationScopeSuite:
- equals and hashCode
- getAllScopes
- json de/serialization
- withScope
- withScope with partial nesting
- withScope with multiple layers of nesting
KryoSerializerDistributedSuite:
- kryo objects are serialised consistently in different processes
OpenHashMapSuite:
- size for specialized, primitive value (int)
- initialization
- primitive value
- non-primitive value
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- contains
- distinguish between the 0/0.0/0L and null
OpenHashSetSuite:
- size for specialized, primitive int
- primitive int
- primitive long
- primitive float
- primitive double
- non-primitive
- non-primitive set growth
- primitive set growth
- SPARK-18200 Support zero as an initial set size
- support for more than 12M items
AccumulatorSuite:
- accumulator serialization
- basic accumulation
- value not assignable from tasks
- add value to collection accumulators
- value not readable in tasks
- collection accumulators
- localValue readable in tasks
- garbage collection
- get accum
- string accumulator param
SparkContextInfoSuite:
- getPersistentRDDs only returns RDDs that are marked as cached
- getPersistentRDDs returns an immutable map
- getRDDStorageInfo only reports on RDDs that actually persist data
- call sites report correct locations
ExecutorAllocationManagerSuite:
- verify min/max executors
- starting state
- add executors
- executionAllocationRatio is correctly handled
- add executors capped by num pending tasks
- add executors when speculative tasks added
- ignore task end events from completed stages
- cancel pending executors when no longer needed
- remove executors
- remove multiple executors
- Removing with various numExecutorsTarget condition
- interleaving add and remove
- starting/canceling add timer
- starting/canceling remove timers
- mock polling loop with no events
- mock polling loop add behavior
- mock polling loop remove behavior
- listeners trigger add executors correctly
- listeners trigger remove executors correctly
- listeners trigger add and remove executor callbacks correctly
- SPARK-4951: call onTaskStart before onBlockManagerAdded
- SPARK-4951: onExecutorAdded should not add a busy executor to removeTimes
- avoid ramp up when target < running executors
- avoid ramp down initial executors until first job is submitted
- avoid ramp down initial executors until idle executor is timeout
- get pending task number and related locality preference
- SPARK-8366: maxNumExecutorsNeeded should properly handle failed tasks
- reset the state of allocation manager
- SPARK-23365 Don't update target num executors when killing idle executors
MemoryStoreSuite:
- reserve/release unroll memory
- safely unroll blocks
- safely unroll blocks through putIteratorAsValues
- safely unroll blocks through putIteratorAsBytes
- PartiallySerializedBlock.valuesIterator
- PartiallySerializedBlock.finishWritingToStream
- multiple unrolls by the same thread
- lazily create a big ByteBuffer to avoid OOM if it cannot be put into MemoryStore
- put a small ByteBuffer to MemoryStore
- SPARK-22083: Release all locks in evictBlocksToFreeSpace
StaticMemoryManagerSuite:
- single task requesting on-heap execution memory
- two tasks requesting full on-heap execution memory
- two tasks cannot grow past 1 / N of on-heap execution memory
- tasks can block to get at least 1 / 2N of on-heap execution memory
- TaskMemoryManager.cleanUpAllAllocatedMemory
- tasks should not be granted a negative amount of execution memory
- off-heap execution allocations cannot exceed limit
- basic execution memory
- basic storage memory
- execution and storage isolation
- unroll memory
SparkSubmitSuite:
- prints usage on empty input
- prints usage with only --help
- prints error with unrecognized options
- handle binary specified but not class
- handles arguments with --key=val
- handles arguments to user program
- handles arguments to user program with name collision
- print the right queue name
- SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled
- specify deploy mode through configuration
- handles YARN cluster mode
- handles YARN client mode
- handles standalone cluster mode
- handles legacy standalone cluster mode
- handles standalone client mode
- handles mesos client mode
- handles k8s cluster mode
- handles confs with flag equivalents
- SPARK-21568 ConsoleProgressBar should be enabled only in shells
- launch simple application with spark-submit
- launch simple application with spark-submit with redaction
- includes jars passed in through --jars
- includes jars passed in through --packages
- includes jars passed through spark.jars.packages and spark.jars.repositories
- correctly builds R packages included in a jar with --packages !!! IGNORED !!!
- include an external JAR in SparkR !!! CANCELED !!!
  org.apache.spark.api.r.RUtils.isSparkRInstalled was false SparkR is not installed in this build. (SparkSubmitSuite.scala:611)
- resolves command line argument paths correctly
- ambiguous archive mapping results in error message
- resolves config paths correctly
- user classpath first in driver
- SPARK_CONF_DIR overrides spark-defaults.conf
- support glob path
- downloadFile - invalid url
- downloadFile - file doesn't exist
- downloadFile does not download local file
- download one file to local
- download list of files to local
- Avoid re-upload remote resources in yarn client mode
- download remote resource if it is not supported by yarn service
- avoid downloading remote resource if it is supported by yarn service
- force download from blacklisted schemes
- force download for all the schemes
- start SparkApplication without modifying system properties
- support --py-files/spark.submit.pyFiles in non pyspark application
- handles natural line delimiters in --properties-file and --conf uniformly
RPackageUtilsSuite:
- pick which jars to unpack using the manifest
- build an R package from a jar end to end
- jars that don't exist are skipped and print warning
- faulty R package shows documentation
- jars without manifest return false
- SparkR zipping works properly
TaskDescriptionSuite:
- encoding and then decoding a TaskDescription results in the same TaskDescription
MeanEvaluatorSuite:
- test count 0
- test count 1
- test count > 1
TopologyMapperSuite:
- File based Topology Mapper
ShuffleNettySuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
CountEvaluatorSuite:
- test count 0
- test count >= 1
KryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- serialization buffer overflow reporting
- SPARK-12222: deserialize RoaringBitmap throw Buffer underflow exception
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true
- instance reuse with autoReset = false, referenceTracking = true
- instance reuse with autoReset = true, referenceTracking = false
- instance reuse with autoReset = false, referenceTracking = false
BlacklistTrackerSuite:
- executors can be blacklisted with only a few failures per stage
- executors aren't blacklisted as a result of tasks in failed task sets
- stage blacklist updates correctly on stage success
- stage blacklist updates correctly on stage failure
- blacklisted executors and nodes get recovered with time
- blacklist can handle lost executors
- task failures expire with time
- task failure timeout works as expected for long-running tasksets
- only blacklist nodes for the application when enough executors have failed on that specific host
- blacklist still respects legacy configs
- check blacklist configuration invariants
- blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
- fetch failure blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
FailureSuite:
- failure in a single-stage job
- failure in a two-stage job
- failure in a map stage
- failure because task results are not serializable
- failure because task closure is not serializable
- managed memory leak error should not mask other failures (SPARK-9266
- last failure cause is sent back to driver
- failure cause stacktrace is sent back to driver if exception is not serializable
- failure cause stacktrace is sent back to driver if exception is not deserializable
- failure in tasks in a submitMapStage
- failure because cached RDD partitions are missing from DiskStore (SPARK-15736)
- SPARK-16304: Link error should not crash executor
PartitionwiseSampledRDDSuite:
- seed distribution
- concurrency
JdbcRDDSuite:
- basic functionality
- large id overflow
FileSuite:
- text files
- text files (compressed)
- SequenceFiles
- SequenceFile (compressed)
- SequenceFile with writable key
- SequenceFile with writable value
- SequenceFile with writable key and value
- implicit conversions in reading SequenceFiles
- object files of ints
- object files of complex types
- object files of classes from a JAR
- write SequenceFile using new Hadoop API
- read SequenceFile using new Hadoop API
- binary file input as byte array
- portabledatastream caching tests
- portabledatastream persist disk storage
- portabledatastream flatmap tests
- SPARK-22357 test binaryFiles minPartitions
- fixed record length binary file as byte array
- negative binary record length should raise an exception
- file caching
- prevent user from overwriting the empty directory (old Hadoop API)
- prevent user from overwriting the non-empty directory (old Hadoop API)
- allow user to disable the output directory existence checking (old Hadoop API)
- prevent user from overwriting the empty directory (new Hadoop API)
- prevent user from overwriting the non-empty directory (new Hadoop API)
- allow user to disable the output directory existence checking (new Hadoop API
- save Hadoop Dataset through old Hadoop API
- save Hadoop Dataset through new Hadoop API
- Get input files via old Hadoop API
- Get input files via new Hadoop API
- spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD
- spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API)
- spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API)
- spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD
SparkContextSuite:
- Only one SparkContext may be active at a time
- Can still construct a new SparkContext after failing to construct a previous one
- Check for multiple SparkContexts can be disabled via undocumented debug option
- Test getOrCreate
- BytesWritable implicit conversion is correct
- basic case for addFile and listFiles
- add and list jar files
- SPARK-17650: malformed url's throw exceptions before bricking Executors
- addFile recursive works
- addFile recursive can't add directories by default
- cannot call addFile with different paths that have the same filename
- addJar can be called twice with same file in local-mode (SPARK-16787)
- addFile can be called twice with same file in local-mode (SPARK-16787)
- addJar can be called twice with same file in non-local-mode (SPARK-16787)
- addFile can be called twice with same file in non-local-mode (SPARK-16787)
- add jar with invalid path
- SPARK-22585 addJar argument without scheme is interpreted literally without url decoding
- Cancelling job group should not cause SparkContext to shutdown (SPARK-6414)
- Comma separated paths for newAPIHadoopFile/wholeTextFiles/binaryFiles (SPARK-7155)
- Default path for file based RDDs is properly set (SPARK-12517)
- calling multiple sc.stop() must not throw any exception
- No exception when both num-executors and dynamic allocation set.
- localProperties are inherited by spawned threads.
- localProperties do not cross-talk between threads.
- log level case-insensitive and reset log level
- register and deregister Spark listener from SparkContext
- Cancelling stages/jobs with custom reasons.
- client mode with a k8s master url
- Killing tasks that raise interrupted exception on cancel
- Killing tasks that raise runtime exception on cancel
- SPARK-19446: DebugFilesystem.assertNoOpenStreams should report open streams to help debugging
java.lang.Throwable
	at org.apache.spark.DebugFilesystem$.addOpenStream(DebugFilesystem.scala:36)
	at org.apache.spark.DebugFilesystem.open(DebugFilesystem.scala:70)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
	at org.apache.spark.SparkContextSuite$$anonfun$12.apply$mcV$sp(SparkContextSuite.scala:622)
	at org.apache.spark.SparkContextSuite$$anonfun$12.apply(SparkContextSuite.scala:615)
	at org.apache.spark.SparkContextSuite$$anonfun$12.apply(SparkContextSuite.scala:615)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:103)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
	at org.apache.spark.SparkContextSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkContextSuite.scala:40)
	at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
	at org.apache.spark.SparkContextSuite.runTest(SparkContextSuite.scala:40)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite$class.run(Suite.scala:1147)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:52)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:52)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1210)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1257)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1255)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1255)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
	at org.scalatest.Suite$class.run(Suite.scala:1144)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1340)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1334)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1334)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1011)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1010)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1500)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1010)
	at org.scalatest.tools.Runner$.main(Runner.scala:827)
	at org.scalatest.tools.Runner.main(Runner.scala)
- support barrier execution mode under local mode
- support barrier execution mode under local-cluster mode
DiskBlockObjectWriterSuite:
- verify write metrics
- verify write metrics on revert
- Reopening a closed block writer
- calling revertPartialWritesAndClose() on a partial write should truncate up to commit
- calling revertPartialWritesAndClose() after commit() should have no effect
- calling revertPartialWritesAndClose() on a closed block writer should have no effect
- commit() and close() should be idempotent
- revertPartialWritesAndClose() should be idempotent
- commit() and close() without ever opening or writing
ThreadingSuite:
- accessing SparkContext form a different thread
- accessing SparkContext form multiple threads
- accessing multi-threaded SparkContext form multiple threads
- parallel job execution
- set local properties in different thread
- set and get local properties in parent-children thread
- mutation in parent local property does not affect child (SPARK-10563)
PythonRDDSuite:
- Writing large strings to the worker
- Handle nulls gracefully
- python server error handling
ShuffleDependencySuite:
- key, value, and combiner classes correct in shuffle dependency without aggregation
- key, value, and combiner classes available in shuffle dependency with aggregation
- combineByKey null combiner class tag handled correctly
JVMObjectTrackerSuite:
- JVMObjectId does not take null IDs
- JVMObjectTracker
ClosureCleanerSuite2:
- get inner closure classes
- get outer classes and objects
- get outer classes and objects with nesting
- find accessed fields
- find accessed fields with nesting
- clean basic serializable closures
- clean basic non-serializable closures
- clean basic nested serializable closures
- clean basic nested non-serializable closures
- clean complicated nested serializable closures
- clean complicated nested non-serializable closures
- verify nested non-LMF closures !!! CANCELED !!!
  ClosureCleanerSuite2.supportsLMFs was false (ClosureCleanerSuite2.scala:579)
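The "!!! CANCELED !!!" entries here and in SparkSubmitSuite above come from ScalaTest's assume(...), which cancels a test instead of failing it when a precondition (supportsLMFs, RUtils.isSparkRInstalled) does not hold. A minimal sketch of that pattern, using a made-up precondition:

import org.scalatest.FunSuite

class CancellationSketch extends FunSuite {
  test("runs only when the optional feature is available") {
    val featureAvailable = false   // stands in for a real capability check
    // assume() throws TestCanceledException when the condition is false,
    // so the runner reports the test as CANCELED with the clue below.
    assume(featureAvailable, "optional feature is not available in this build")
    // never reached when the assumption fails
    assert(1 + 1 === 2)
  }
}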
PartitionPruningRDDSuite:
- Pruned Partitions inherit locality prefs correctly
- Pruned Partitions can be unioned 
SimpleDateParamSuite:
- date parsing
StorageSuite:
- storage status add non-RDD blocks
- storage status add RDD blocks
- storage status getBlock
- storage status memUsed, diskUsed, externalBlockStoreUsed
- storage memUsed, diskUsed with on-heap and off-heap blocks
- old SparkListenerBlockManagerAdded event compatible
CausedBySuite:
- For an error without a cause, should return the error
- For an error with a cause, should return the cause of the error
- For an error with a cause that itself has a cause, return the root cause
JavaUtilsSuite:
- containsKey implementation without iteratively entrySet call
KryoBenchmark:
- Benchmark Kryo Unsafe vs safe Serialization !!! IGNORED !!!
FileAppenderSuite:
- basic file appender
- rolling file appender - time-based rolling
- rolling file appender - time-based rolling (compressed)
- rolling file appender - size-based rolling
- rolling file appender - size-based rolling (compressed)
- rolling file appender - cleaning
- file appender selection
- file appender async close stream abruptly
- file appender async close stream gracefully
BypassMergeSortShuffleWriterSuite:
- write empty iterator
- write with some empty partitions
- only generate temp shuffle file for non-empty partition
- cleanup of intermediate files after errors
DistributedSuite:
- task throws not serializable exception
- local-cluster format
- simple groupByKey
- groupByKey where map output sizes exceed maxMbInFlight
- accumulators
- broadcast variables
- repeatedly failing task
- repeatedly failing task that crashes JVM
- repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925)
- caching (encryption = off)
- caching (encryption = on)
- caching on disk (encryption = off)
- caching on disk (encryption = on)
- caching in memory, replicated (encryption = off)
- caching in memory, replicated (encryption = off) (with replication as stream)
- caching in memory, replicated (encryption = on)
- caching in memory, replicated (encryption = on) (with replication as stream)
- caching in memory, serialized, replicated (encryption = off)
- caching in memory, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory, serialized, replicated (encryption = on)
- caching in memory, serialized, replicated (encryption = on) (with replication as stream)
- caching on disk, replicated (encryption = off)
- caching on disk, replicated (encryption = off) (with replication as stream)
- caching on disk, replicated (encryption = on)
- caching on disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, replicated (encryption = off)
- caching in memory and disk, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, replicated (encryption = on)
- caching in memory and disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = off)
- caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = on)
- caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream)
- compute without caching when no partitions fit in memory
- compute when only some partitions fit in memory
- passing environment variables to cluster
- recover from node failures
- recover from repeated node failures during shuffle-map
- recover from repeated node failures during shuffle-reduce
- recover from node failures with replication
- unpersist RDDs
FutureActionSuite:
- simple async action
- complex async action
LocalCheckpointSuite:
- transform storage level
- basic lineage truncation
- basic lineage truncation - caching before checkpointing
- basic lineage truncation - caching after checkpointing
- indirect lineage truncation
- indirect lineage truncation - caching before checkpointing
- indirect lineage truncation - caching after checkpointing
- checkpoint without draining iterator
- checkpoint without draining iterator - caching before checkpointing
- checkpoint without draining iterator - caching after checkpointing
- checkpoint blocks exist
- checkpoint blocks exist - caching before checkpointing
- checkpoint blocks exist - caching after checkpointing
- missing checkpoint block fails with informative message
WorkerWatcherSuite:
- WorkerWatcher shuts down on valid disassociation
- WorkerWatcher stays alive on invalid disassociation
NettyRpcEnvSuite:
- send a message locally
- send a message remotely
- send a RpcEndpointRef
- ask a message locally
- ask a message remotely
- ask a message timeout
- onStart and onStop
- onError: error in onStart
- onError: error in onStop
- onError: error in receive
- self: call in onStart
- self: call in receive
- self: call in onStop
- call receive in sequence
- stop(RpcEndpointRef) reentrant
- sendWithReply
- sendWithReply: remotely
- sendWithReply: error
- sendWithReply: remotely error
- network events in sever RpcEnv when another RpcEnv is in server mode
- network events in sever RpcEnv when another RpcEnv is in client mode
- network events in client RpcEnv when another RpcEnv is in server mode
- sendWithReply: unserializable error
- port conflict
- send with authentication
- send with SASL encryption
- send with AES encryption
- ask with authentication
- ask with SASL encryption
- ask with AES encryption
- construct RpcTimeout with conf property
- ask a message timeout on Future using RpcTimeout
- file server
- SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events
- non-existent endpoint
- advertise address different from bind address
- RequestMessage serialization
PagedTableSuite:
- pageNavigation
ClientSuite:
- correctly validates driver jar URL's
BlockIdSuite:
- test-bad-deserialization
- rdd
- shuffle
- shuffle data
- shuffle index
- broadcast
- taskresult
- stream
- temp local
- temp shuffle
- test
PartiallyUnrolledIteratorSuite:
- join two iterators
KryoSerializerResizableOutputSuite:
- kryo without resizable output buffer should fail on large array
- kryo with resizable output buffer should succeed on large array
BlockManagerReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
BarrierTaskContextSuite:
- global sync by barrier() call
- support multiple barrier() call within a single task
- throw exception on barrier() call timeout
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@244b4f3 rejected from java.util.concurrent.ThreadPoolExecutor@47463adf[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@647e712 rejected from java.util.concurrent.ThreadPoolExecutor@434569be[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
- throw exception if barrier() call doesn't happen on every task
- throw exception if the number of barrier() calls are not the same on every task
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@58dadf75 rejected from java.util.concurrent.ThreadPoolExecutor@1dd84a0a[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
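The RejectedExecutionException traces above (and the similar ones further down in StandaloneDynamicAllocationSuite, AppClientSuite and BroadcastSuite) look like shutdown noise rather than test failures: a Future callback is handed to a thread pool that is already shutting down, and the pool's default AbortPolicy rejects it. A minimal sketch of that behaviour (illustrative only, not Spark code):

import java.util.concurrent.{Executors, RejectedExecutionException}

object RejectedAfterShutdownSketch {
  def main(args: Array[String]): Unit = {
    val pool = Executors.newSingleThreadExecutor()
    pool.shutdown()   // the pool now refuses new work
    try {
      pool.execute(new Runnable {
        override def run(): Unit = println("never runs")
      })
    } catch {
      case e: RejectedExecutionException =>
        // same exception type as in the traces above
        println(s"rejected as expected: ${e.getMessage}")
    } finally {
      pool.shutdownNow()
    }
  }
}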
BlockStoreShuffleReaderSuite:
- read() releases resources on completion
WholeTextFileRecordReaderSuite:
- Correctness of WholeTextFileRecordReader.
- Correctness of WholeTextFileRecordReader with GzipCodec.
SubmitRestProtocolSuite:
- validate
- request to and from JSON
- response to and from JSON
- CreateSubmissionRequest
- CreateSubmissionResponse
- KillSubmissionResponse
- SubmissionStatusResponse
- ErrorResponse
FlatmapIteratorSuite:
- Flatmap Iterator to Disk
- Flatmap Iterator to Memory
- Serializer Reset
SizeEstimatorSuite:
- simple classes
- primitive wrapper objects
- class field blocks rounding
- strings
- primitive arrays
- object arrays
- 32-bit arch
- 64-bit arch with no compressed oops
- class field blocks rounding on 64-bit VM without useCompressedOops
- check 64-bit detection for s390x arch
- SizeEstimation can provide the estimated size
ElementTrackingStoreSuite:
- tracking for multiple types
PipedRDDSuite:
- basic pipe
- basic pipe with tokenization
- failure in iterating over pipe input
- advanced pipe
- pipe with empty partition
- pipe with env variable
- pipe with process which cannot be launched due to bad command
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
- pipe with process which is launched but fails with non-zero exit status
- basic pipe with separate working directory
- test pipe exports map_input_file
- test pipe exports mapreduce_map_input_file
AccumulatorV2Suite:
- LongAccumulator add/avg/sum/count/isZero
- DoubleAccumulator add/avg/sum/count/isZero
- ListAccumulator
- LegacyAccumulatorWrapper
- LegacyAccumulatorWrapper with AccumulatorParam that has no equals/hashCode
InboxSuite:
- post
- post: with reply
- post: multiple threads
- post: Associated
- post: Disassociated
- post: AssociationError
MasterWebUISuite:
- kill application
- kill driver
RadixSortSuite:
- radix support for unsigned binary data asc nulls first
- sort unsigned binary data asc nulls first
- sort key prefix unsigned binary data asc nulls first
- fuzz test unsigned binary data asc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls first with random bitmasks
- radix support for unsigned binary data asc nulls last
- sort unsigned binary data asc nulls last
- sort key prefix unsigned binary data asc nulls last
- fuzz test unsigned binary data asc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls last
- sort unsigned binary data desc nulls last
- sort key prefix unsigned binary data desc nulls last
- fuzz test unsigned binary data desc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls first
- sort unsigned binary data desc nulls first
- sort key prefix unsigned binary data desc nulls first
- fuzz test unsigned binary data desc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls first with random bitmasks
- radix support for twos complement asc nulls first
- sort twos complement asc nulls first
- sort key prefix twos complement asc nulls first
- fuzz test twos complement asc nulls first with random bitmasks
- fuzz test key prefix twos complement asc nulls first with random bitmasks
- radix support for twos complement asc nulls last
- sort twos complement asc nulls last
- sort key prefix twos complement asc nulls last
- fuzz test twos complement asc nulls last with random bitmasks
- fuzz test key prefix twos complement asc nulls last with random bitmasks
- radix support for twos complement desc nulls last
- sort twos complement desc nulls last
- sort key prefix twos complement desc nulls last
- fuzz test twos complement desc nulls last with random bitmasks
- fuzz test key prefix twos complement desc nulls last with random bitmasks
- radix support for twos complement desc nulls first
- sort twos complement desc nulls first
- sort key prefix twos complement desc nulls first
- fuzz test twos complement desc nulls first with random bitmasks
- fuzz test key prefix twos complement desc nulls first with random bitmasks
- radix support for binary data partial
- sort binary data partial
- sort key prefix binary data partial
- fuzz test binary data partial with random bitmasks
- fuzz test key prefix binary data partial with random bitmasks
DiskBlockManagerSuite:
- basic block creation
- enumerating blocks
- SPARK-22227: non-block files are skipped
WorkerArgumentsTest:
- Memory can't be set to 0 when cmd line args leave off M or G
- Memory can't be set to 0 when SPARK_WORKER_MEMORY env property leaves off M or G
- Memory correctly set when SPARK_WORKER_MEMORY env property appends G
- Memory correctly set from args with M appended to memory value
StatusTrackerSuite:
- basic status API usage
- getJobIdsForGroup()
- getJobIdsForGroup() with takeAsync()
- getJobIdsForGroup() with takeAsync() across multiple partitions
PrimitiveKeyOpenHashMapSuite:
- size for specialized, primitive key, value (int, int)
- initialization
- basic operations
- null values
- changeValue
- inserting in capacity-1 map
- contains
ApplicationCacheSuite:
- Completed UI get
- Test that if an attempt ID is set, it must be used in lookups
- Incomplete apps refreshed
- Large Scale Application Eviction
- Attempts are Evicted
- redirect includes query params
StandaloneDynamicAllocationSuite:
- dynamic allocation default behavior
- dynamic allocation with max cores <= cores per worker
- dynamic allocation with max cores > cores per worker
- dynamic allocation with cores per executor
- dynamic allocation with cores per executor AND max cores
- kill the same executor twice (SPARK-9795)
- the pending replacement executors should not be lost (SPARK-10515)
- disable force kill for busy executors (SPARK-9552)
- initial executor limit
- kill all executors on localhost
- executor registration on a blacklisted host must fail
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@578ce68d rejected from java.util.concurrent.ThreadPoolExecutor@128e6218[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 18]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@4704d5a2 rejected from java.util.concurrent.ThreadPoolExecutor@2065326[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 20]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
ExternalClusterManagerSuite:
- launch of backend and scheduler
LogUrlsStandaloneSuite:
- verify that correct log urls get propagated from workers
- verify that log urls reflect SPARK_PUBLIC_DNS (SPARK-6175)
AppClientSuite:
- interface methods of AppClient using local Master
- request from AppClient before initialized with master
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@4448a6ed rejected from java.util.concurrent.ThreadPoolExecutor@14a2000[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
InternalAccumulatorSuite:
- internal accumulators in TaskContext
- internal accumulators in a stage
- internal accumulators in multiple stages
- internal accumulators in resubmitted stages
- internal accumulators are registered for cleanups
JsonProtocolSuite:
- SparkListenerEvent
- Dependent Classes
- ExceptionFailure backward compatibility: full stack trace
- StageInfo backward compatibility (details, accumulables)
- InputMetrics backward compatibility
- Input/Output records backwards compatibility
- Shuffle Read/Write records backwards compatibility
- OutputMetrics backward compatibility
- BlockManager events backward compatibility
- FetchFailed backwards compatibility
- ShuffleReadMetrics: Local bytes read backwards compatibility
- SparkListenerApplicationStart backwards compatibility
- ExecutorLostFailure backward compatibility
- SparkListenerJobStart backward compatibility
- SparkListenerJobStart and SparkListenerJobEnd backward compatibility
- RDDInfo backward compatibility (scope, parent IDs, callsite)
- StageInfo backward compatibility (parent IDs)
- TaskCommitDenied backward compatibility
- AccumulableInfo backward compatibility
- ExceptionFailure backward compatibility: accumulator updates
- ExecutorMetricsUpdate backward compatibility: executor metrics update
- executorMetricsFromJson backward compatibility: handle missing metrics
- AccumulableInfo value de/serialization
BroadcastSuite:
- Using TorrentBroadcast locally
- Accessing TorrentBroadcast variables from multiple threads
- Accessing TorrentBroadcast variables in a local cluster (encryption = off)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@70886d65 rejected from java.util.concurrent.ThreadPoolExecutor@358aed10[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@646cbdff rejected from java.util.concurrent.ThreadPoolExecutor@216e2f16[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Exception in thread "dispatcher-event-loop-13" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-13" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-16" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-13" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-17" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-17" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-17" java.lang.OutOfMemoryError: GC overhead limit exceeded
10/16/18 1:01:29 AM ============================================================

-- Gauges ----------------------------------------------------------------------
master.aliveWorkers
Exception in thread "RemoteBlock-temp-file-clean-thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Spark Context Cleaner" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ScalaTest-dispatcher" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "RemoteBlock-temp-file-clean-thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "RemoteBlock-temp-file-clean-thread" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-4" java.lang.OutOfMemoryError: GC overhead limit exceeded
An exception or error caused a run to abort. This may have been caused by a problematic custom reporter.
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "dispatcher-event-loop-28" java.lang.OutOfMemoryError: GC overhead limit exceeded
[INFO] 
[INFO] --------------< org.apache.spark:spark-mllib-local_2.11 >---------------
[INFO] Building Spark Project ML Local Library 3.0.0-SNAPSHOT           [10/30]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ spark-mllib-local_2.11 ---
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:add-source (eclipse-add-source) @ spark-mllib-local_2.11 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/mllib-local/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/mllib-local/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (default-cli) @ spark-mllib-local_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scalanlp/breeze_2.11/0.13.2/breeze_2.11-0.13.2.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.11/0.13.2/breeze-macros_2.11-0.13.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.11/1.1.1/macro-compat_2.11-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.11/2.3.2/shapeless_2.11-2.3.2.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.11/0.6.1/machinist_2.11-0.6.1.jar:/home/jenkins/.m2/repository/org/spire-math/spire_2.11/0.13.0/spire_2.11-0.13.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/tags/target/scala-2.11/classes:/home/jenkins/.m2/repository/org/spire-math/spire-macros_2.11/0.13.0/spire-macros_2.11-0.13.0.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/com/github/rwl/jtransforms/2.4.0/jtransforms-2.4.0.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-mllib-local_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-mllib-local_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/mllib-local/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ spark-mllib-local_2.11 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ spark-mllib-local_2.11 ---
[INFO] Using zinc server for incremental compilation
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/mllib-local/target/scala-2.11/classes...
[info] Compile success at Oct 16, 2018 1:12:14 AM [1.968s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-mllib-local_2.11 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-mllib-local_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/mllib-local/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ spark-mllib-local_2.11 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (generate-test-classpath) @ spark-mllib-local_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.11/3.0.3/scalatest_2.11-3.0.3.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze_2.11/0.13.2/breeze_2.11-0.13.2.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.11/1.1.1/macro-compat_2.11-1.1.1.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.11/3.0.3/scalactic_2.11-3.0.3.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.11/0.6.1/machinist_2.11-0.6.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/1.10.19/mockito-core-1.10.19.jar:/home/jenkins/.m2/repository/org/spire-math/spire_2.11/0.13.0/spire_2.11-0.13.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/tags/target/scala-2.11/test-classes:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.1/objenesis-2.1.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.11/0.13.2/breeze-macros_2.11-0.13.2.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.11/2.3.2/shapeless_2.11-2.3.2.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/tags/target/scala-2.11/classes:/home/jenkins/.m2/repository/org/spire-math/spire-macros_2.11/0.13.0/spire-macros_2.11-0.13.0.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.11/1.13.5/scalacheck_2.11-1.13.5.jar:/home/jenkins/.m2/repository/com/github/rwl/jtransforms/2.4.0/jtransforms-2.4.0.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ spark-mllib-local_2.11 ---
[INFO] Using zinc server for incremental compilation
[info] Compile success at Oct 16, 2018 1:12:14 AM [0.073s]
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (default-test) @ spark-mllib-local_2.11 ---
[INFO] 
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (test) @ spark-mllib-local_2.11 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ spark-mllib-local_2.11 ---
Discovery starting.
Discovery completed in 210 milliseconds.
Run starting. Expected test count is: 85
BLASSuite:
- copy
Oct 16, 2018 1:12:15 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Oct 16, 2018 1:12:15 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
- scal
- axpy
- dot
- spr
- syr
- gemm
- gemv
- spmv
UtilsSuite:
- EPSILON
TestingUtilsSuite:
- Comparing doubles using relative error.
- Comparing doubles using absolute error.
- Comparing vectors using relative error.
- Comparing vectors using absolute error.
- Comparing Matrices using absolute error.
- Comparing Matrices using relative error.
BreezeMatrixConversionSuite:
- dense matrix to breeze
- dense breeze matrix to matrix
- sparse matrix to breeze
- sparse breeze matrix to sparse matrix
BreezeVectorConversionSuite:
- dense to breeze
- sparse to breeze
- dense breeze to vector
- sparse breeze to vector
- sparse breeze with partially-used arrays to vector
MultivariateGaussianSuite:
Oct 16, 2018 1:12:16 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Oct 16, 2018 1:12:16 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
- univariate
- multivariate
- multivariate degenerate
- SPARK-11302
MatricesSuite:
- dense matrix construction
- dense matrix construction with wrong dimension
- sparse matrix construction
- sparse matrix construction with wrong number of elements
- index in matrices incorrect input
- equals
- matrix copies are deep copies
- matrix indexing and updating
- dense to dense
- dense to sparse
- sparse to sparse
- sparse to dense
- compressed dense
- compressed sparse
- map, update
- transpose
- foreachActive
- horzcat, vertcat, eye, speye
- zeros
- ones
- eye
- rand
- randn
- diag
- sprand
- sprandn
- toString
- numNonzeros and numActives
- fromBreeze with sparse matrix
- row/col iterator
VectorsSuite:
- dense vector construction with varargs
- dense vector construction from a double array
- sparse vector construction
- sparse vector construction with unordered elements
- sparse vector construction with mismatched indices/values array
- sparse vector construction with too many indices vs size
- sparse vector construction with negative indices
- dense to array
- dense argmax
- sparse to array
- sparse argmax
- vector equals
- vectors equals with explicit 0
- indexing dense vectors
- indexing sparse vectors
- zeros
- Vector.copy
- fromBreeze
- sqdist
- foreachActive
- vector p-norm
- Vector numActive and numNonzeros
- Vector toSparse and toDense
- Vector.compressed
- SparseVector.slice
- sparse vector only support non-negative length
Run completed in 1 second, 617 milliseconds.
Total number of tests run: 85
Suites: completed 9, aborted 0
Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project GraphX
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Catalyst
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project SQL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project ML Library
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] -----------------< org.apache.spark:spark-tools_2.11 >------------------
[INFO] Building Spark Project Tools 3.0.0-SNAPSHOT                      [11/30]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ spark-tools_2.11 ---
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:add-source (eclipse-add-source) @ spark-tools_2.11 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/tools/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/tools/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (default-cli) @ spark-tools_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/5.1/asm-tree-5.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/5.1/asm-commons-5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.11.12/scala-compiler-2.11.12.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.11/4.2.0/grizzled-scala_2.11-4.2.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/5.1/asm-5.1.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.11/1.1.2/classutil_2.11-1.1.2.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-tools_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-tools_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/tools/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ spark-tools_2.11 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ spark-tools_2.11 ---
[INFO] Using zinc server for incremental compilation
[info] Compile success at Oct 16, 2018 1:12:17 AM [0.027s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-tools_2.11 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-tools_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/tools/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ spark-tools_2.11 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (generate-test-classpath) @ spark-tools_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/5.1/asm-tree-5.1.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.11/3.0.3/scalatest_2.11-3.0.3.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/5.1/asm-commons-5.1.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.11.12/scala-compiler-2.11.12.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.11/3.0.3/scalactic_2.11-3.0.3.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.11/4.2.0/grizzled-scala_2.11-4.2.0.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/5.1/asm-5.1.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.11/1.1.2/classutil_2.11-1.1.2.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ spark-tools_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (default-test) @ spark-tools_2.11 ---
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (test) @ spark-tools_2.11 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ spark-tools_2.11 ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 83 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project REPL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --------------< org.apache.spark:spark-network-yarn_2.11 >--------------
[INFO] Building Spark Project YARN Shuffle Service 3.0.0-SNAPSHOT       [12/30]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ spark-network-yarn_2.11 ---
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:add-source (eclipse-add-source) @ spark-network-yarn_2.11 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-yarn/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-yarn/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (default-cli) @ spark-network-yarn_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.5/metrics-core-3.1.5.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.6/jackson-core-2.9.6.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-common/target/scala-2.11/classes:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.17.Final/netty-all-4.1.17.Final.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.5/commons-lang3-3.5.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.6/jackson-annotations-2.9.6.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-network-yarn_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-network-yarn_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-yarn/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:compile (default-compile) @ spark-network-yarn_2.11 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ spark-network-yarn_2.11 ---
[INFO] Using zinc server for incremental compilation
[info] Compiling 3 Java sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-yarn/target/scala-2.11/classes...
[info] Compile success at Oct 16, 2018 1:12:20 AM [1.053s]
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-network-yarn_2.11 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-network-yarn_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-yarn/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.7.0:testCompile (default-testCompile) @ spark-network-yarn_2.11 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath (generate-test-classpath) @ spark-network-yarn_2.11 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.3/hadoop-yarn-common-2.7.3.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.11/3.0.3/scalatest_2.11-3.0.3.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.7.3/hadoop-yarn-api-2.7.3.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.7.3/hadoop-yarn-server-common-2.7.3.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.10/httpcore-4.4.10.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-framework/2.7.1/curator-framework-2.7.1.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.11/3.0.3/scalactic_2.11-3.0.3.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-recipes/2.7.1/curator-recipes-2.7.1.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.8.2/avro-1.8.2.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.5/metrics-core-3.1.5.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/tags/target/scala-2.11/test-classes:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/xml-apis/xml-apis/1.4.01/xml-apis-1.4.01.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/org/tukaani/xz/1.5/xz-1.5.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.1/snappy-java-1.1.7.1.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.5/commons-lang3-3.5.jar:/home/jenkins/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/home/jenkins/.m2/repository/io/netty/netty/3.9.9.Final/netty-3.9.9.Final.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-auth/2.7.3/hadoop-auth-2.7.3.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client/2.7.3/hadoop-client-2.7.3.jar:/home/jenkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.3/hadoop-hdfs-2.7.3.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.7.3/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.3/hadoop-mapreduce-client-core-2.7.3.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/2.7.3/hadoop-common-2.7.3.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/jenkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-common/target/scala-2.11/classes:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/jenkins/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.6/jackson-annotations-2.9.6.jar:/home/jenkins/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.7.3/hadoop-yarn-client-2.7.3.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar:/home/jenkins/.m2/repository/javax/activation/activation/1.1.1/activation-1.1.1.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar:/home/jenkins/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar:/home/jenkins/.m2/repository/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-log4j12/1.7.16/slf4j-log4j12-1.7.16.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.6/jackson-core-2.9.6.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-annotations/2.7.3/hadoop-annotations-2.7.3.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/2.7.3/hadoop-mapreduce-client-common-2.7.3.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar:/home/jenkins/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.7.3/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.7.3/hadoop-mapreduce-client-app-2.7.3.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/common/tags/target/scala-2.11/classes:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.17.Final/netty-all-4.1.17.Final.jar:/home/jenkins/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ spark-network-yarn_2.11 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (default-test) @ spark-network-yarn_2.11 ---
[INFO] 
[INFO] --- maven-surefire-plugin:2.22.0:test (test) @ spark-network-yarn_2.11 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ spark-network-yarn_2.11 ---
Discovery starting.
Discovery completed in 64 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 96 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project YARN
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Mesos
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive Thrift Server
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Source for Structured Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Kinesis Integration
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Examples
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10 Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Avro
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Kinesis Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Spark Project Parent POM 3.0.0-SNAPSHOT ............ SUCCESS [  2.576 s]
[INFO] Spark Project Tags ................................. SUCCESS [  1.868 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 16.127 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  4.326 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 42.998 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  7.827 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  2.612 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  4.480 s]
[INFO] Spark Project Core ................................. FAILURE [38:24 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [  4.875 s]
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [  1.168 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  3.431 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Mesos ................................ SKIPPED
[INFO] Spark Project Hive Thrift Server ................... SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SKIPPED
[INFO] Spark Kinesis Integration .......................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] Spark Avro ......................................... SKIPPED
[INFO] Spark Project Kinesis Assembly 3.0.0-SNAPSHOT ...... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 39:57 min
[INFO] Finished at: 2018-10-16T01:12:21-07:00
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hadoop-2.6" could not be activated because it does not exist.
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project spark-core_2.11: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-core_2.11
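For reference, a concrete resume invocation from the workspace root might look like the line below; the "test" goal is only an assumption, since the goals used by this Jenkins job are not visible in this excerpt, and "-rf :spark-core_2.11" is the resume-from flag Maven itself suggests above.

    mvn test -rf :spark-core_2.11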
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE