Console Output

Skipping 1,348 KB..
SparkContextSchedulerCreationSuite:
- bad-master
- local
- local-*
- local-n
- local-*-n-failures
- local-n-failures
- bad-local-n
- bad-local-n-failures
- local-default-parallelism
- local-cluster
SerializationDebuggerSuite:
- primitives, strings, and nulls
- primitive arrays
- non-primitive arrays
- serializable object
- nested arrays
- nested objects
- cycles (should not loop forever)
- root object not serializable
- array containing not serializable element
- object containing not serializable field
- externalizable class writing out not serializable object
- externalizable class writing out serializable objects
- object containing writeReplace() which returns not serializable object
- object containing writeReplace() which returns serializable object
- no infinite loop with writeReplace() which returns class of its own type
- object containing writeObject() and not serializable field
- object containing writeObject() and serializable field
- object of serializable subclass with more fields than superclass (SPARK-7180)
- crazy nested objects
- improveException
- improveException with error in debugger
LoggingSuite:
- spark-shell logging filter
NettyRpcHandlerSuite:
- receive
- connectionTerminated
SamplingUtilsSuite:
- reservoirSampleAndCount
- SPARK-18678 reservoirSampleAndCount with tiny input
- computeFraction
TimeStampedHashMapSuite:
- HashMap - basic test
- TimeStampedHashMap - basic test
- TimeStampedHashMap - threading safety test
- TimeStampedHashMap - clearing by timestamp
RandomSamplerSuite:
- utilities
- sanity check medianKSD against references
- bernoulli sampling
- bernoulli sampling without iterator
- bernoulli sampling with gap sampling optimization
- bernoulli sampling (without iterator) with gap sampling optimization
- bernoulli boundary cases
- bernoulli (without iterator) boundary cases
- bernoulli data types
- bernoulli clone
- bernoulli set seed
- replacement sampling
- replacement sampling without iterator
- replacement sampling with gap sampling
- replacement sampling (without iterator) with gap sampling
- replacement boundary cases
- replacement (without) boundary cases
- replacement data types
- replacement clone
- replacement set seed
- bernoulli partitioning sampling
- bernoulli partitioning sampling without iterator
- bernoulli partitioning boundary cases
- bernoulli partitioning (without iterator) boundary cases
- bernoulli partitioning data
- bernoulli partitioning clone
ChunkedByteBufferOutputStreamSuite:
- empty output
- write a single byte
- write a single near boundary
- write a single at boundary
- single chunk output
- single chunk output at boundary size
- multiple chunk output
- multiple chunk output at boundary size
ProcfsMetricsGetterSuite:
- testGetProcessInfo
GraphiteSinkSuite:
- GraphiteSink with default MetricsFilter
- GraphiteSink with regex MetricsFilter
SparkSubmitUtilsSuite:
- incorrect maven coordinate throws error
- create repo resolvers
- create additional resolvers
:: loading settings :: url = jar:file:/home/jenkins/.m2/repository/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
- add dependencies works correctly
- excludes works correctly
- ivy path works correctly
- search for artifact at local repositories
- dependency not found throws RuntimeException
- neglects Spark and Spark's dependencies
- exclude dependencies end to end
:: loading settings :: file = /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/core/target/tmp/ivy-0b6593da-8366-4aad-99f4-3064ca794231/ivysettings.xml
- load ivy settings file
- SPARK-10878: test resolution files cleaned after resolving artifact
BasicEventFilterBuilderSuite:
- track live jobs
- track live executors
ImplicitOrderingSuite:
- basic inference of Orderings
TaskMetricsSuite:
- mutating values
- mutating shuffle read metrics values
- mutating shuffle write metrics values
- mutating input metrics values
- mutating output metrics values
- merging multiple shuffle read metrics
- additional accumulables
ExternalShuffleServiceSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- using external shuffle service
- SPARK-27651: read host local shuffle blocks from disk and avoid network remote fetches
- SPARK-25888: using external shuffle service fetching disk persisted blocks
ClosureCleanerSuite:
- closures inside an object
- closures inside a class
- closures inside a class with no default constructor
- closures that don't use fields of the outer class
- nested closures inside an object
- nested closures inside a class
- toplevel return statements in closures are identified at cleaning time
- return statements from named functions nested in closures don't raise exceptions
- user provided closures are actually cleaned
- createNullValue
UnpersistSuite:
- unpersist RDD
PeriodicRDDCheckpointerSuite:
- Persisting
- Checkpointing
TaskSetManagerSuite:
- TaskSet with no preferences
- multiple offers with no preferences
- skip unsatisfiable locality levels
- basic delay scheduling
- we do not need to delay scheduling when we only have noPref tasks in the queue
- delay scheduling with fallback
- delay scheduling with failed hosts
- task result lost
- repeated failures lead to task set abortion
- executors should be blacklisted after task failure, in spite of locality preferences
- new executors get added and lost
- Executors exit for reason unrelated to currently running tasks
- test RACK_LOCAL tasks
- do not emit warning when serialized task is small
- emit warning when serialized task is large
- Not serializable exception thrown if the task cannot be serialized
- abort the job if total size of results is too large
- [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie
- [SPARK-22074] Task killed by other attempt task should not be resubmitted
- speculative and noPref task should be scheduled after node-local
- node-local tasks should be scheduled right away when there are only node-local and no-preference tasks
- SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished
- SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished
- Ensure TaskSetManager is usable after addition of levels
- Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL.
- Test TaskLocation for different host type.
- Kill other task attempts when one attempt belonging to the same task succeeds
- Killing speculative tasks does not count towards aborting the taskset
- SPARK-19868: DagScheduler only notified of taskEnd when state is ready
- SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names
- don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks
- update application blacklist for shuffle-fetch
- update blacklist before adding pending task to avoid race condition
- SPARK-21563 context's added jars shouldn't change mid-TaskSet
- SPARK-24677: Avoid NoSuchElementException from MedianHeap
- SPARK-24755 Executor loss can cause task to not be resubmitted
- SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success
- SPARK-13704 Rack Resolution is done with a batch of de-duped hosts
- TaskSetManager allocate resource addresses from available resources
- SPARK-26755 Ensure that a speculative task is submitted only once for execution
- SPARK-26755 Ensure that a speculative task obeys original locality preferences
- SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 1
- SPARK-29976: when the speculation time threshold is not provided, don't speculative run if there are not enough successful runs, total tasks: 1
- SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 2
- SPARK-29976: when the speculation time threshold is not provided, don't speculative run if there are not enough successful runs, total tasks: 2
- SPARK-29976 when a speculation time threshold is provided, should not speculative if there are too many tasks in the stage even though time threshold is provided
- SPARK-29976 Regular speculation configs should still take effect even when a threshold is provided
- SPARK-30417 when spark.task.cpus is greater than spark.executor.cores due to standalone settings, speculate if there is only one task in the stage
- TaskOutputFileAlreadyExistException lead to task set abortion
- SPARK-30359: don't clean executorsPendingToRemove at the beginning of CoarseGrainedSchedulerBackend.reset
BlockManagerBasicStrategyReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
RDDOperationGraphSuite:
- Test simple cluster equals
ShuffleExternalSorterSuite:
- nested spill should be no-op
ChunkedByteBufferSuite:
- no chunks
- getChunks() duplicates chunks
- copy() does not affect original buffer's position
- writeFully() does not affect original buffer's position
- SPARK-24107: writeFully() write buffer which is larger than bufferWriteChunkSize
- toArray()
- toArray() throws UnsupportedOperationException if size exceeds 2GB
- toInputStream()
HistoryServerDiskManagerSuite:
- leasing space
- tracking active stores
- approximate size heuristic
PythonBroadcastSuite:
- PythonBroadcast can be serialized with Kryo (SPARK-4882)
KeyLockSuite:
- The same key should wait when its lock is held
- A different key should not be locked
NettyBlockTransferServiceSuite:
- can bind to a random port
- can bind to two random ports
- can bind to a specific port
- can bind to a specific port twice and the second increments
- SPARK-27637: test fetch block with executor dead
BasicSchedulerIntegrationSuite:
- super simple job
- multi-stage job
- job with fetch failure
- job failure after 4 attempts
JobWaiterSuite:
- call jobFailed multiple times
RDDBarrierSuite:
- create an RDDBarrier
- RDDBarrier mapPartitionsWithIndex
- create an RDDBarrier in the middle of a chain of RDDs
- RDDBarrier with shuffle
UninterruptibleThreadSuite:
- interrupt when runUninterruptibly is running
- interrupt before runUninterruptibly runs
- nested runUninterruptibly
- stress test
DriverSuite:
- driver should exit after finishing without cleanup (SPARK-530) !!! IGNORED !!!
CompactBufferSuite:
- empty buffer
- basic inserts
- adding sequences
- adding the same buffer to itself
MapStatusSuite:
- compressSize
- decompressSize
- MapStatus should never report non-empty blocks' sizes as 0
- large tasks should use org.apache.spark.scheduler.HighlyCompressedMapStatus
- HighlyCompressedMapStatus: estimated size should be the average non-empty block size
- SPARK-22540: ensure HighlyCompressedMapStatus calculates correct avgSize
- RoaringBitmap: runOptimize succeeded
- RoaringBitmap: runOptimize failed
- Blocks which are bigger than SHUFFLE_ACCURATE_BLOCK_THRESHOLD should not be underestimated.
- SPARK-21133 HighlyCompressedMapStatus#writeExternal throws NPE
BlockInfoManagerSuite:
- initial memory usage
- get non-existent block
- basic lockNewBlockForWriting
- lockNewBlockForWriting blocks while write lock is held, then returns false after release
- lockNewBlockForWriting blocks while write lock is held, then returns true after removal
- read locks are reentrant
- multiple tasks can hold read locks
- single task can hold write lock
- cannot grab a writer lock while already holding a write lock
- assertBlockIsLockedForWriting throws exception if block is not locked
- downgrade lock
- write lock will block readers
- read locks will block writer
- removing a non-existent block throws IllegalArgumentException
- removing a block without holding any locks throws IllegalStateException
- removing a block while holding only a read lock throws IllegalStateException
- removing a block causes blocked callers to receive None
- releaseAllLocksForTask releases write locks
StoragePageSuite:
- rddTable
- empty rddTable
- streamBlockStorageLevelDescriptionAndSize
- receiverBlockTables
- empty receiverBlockTables
TaskSchedulerImplSuite:
- Scheduler does not always schedule tasks on the same workers
- Scheduler correctly accounts for multiple CPUs per task
- Scheduler does not crash when tasks are not serializable
- concurrent attempts for the same stage only have one active taskset
- don't schedule more tasks after a taskset is zombie
- if a zombie attempt finishes, continue scheduling tasks for non-zombie attempts
- tasks are not re-scheduled while executor loss reason is pending
- scheduled tasks obey task and stage blacklists
- scheduled tasks obey node and executor blacklists
- abort stage when all executors are blacklisted and we cannot acquire new executor
- SPARK-22148 abort timer should kick in when task is completely blacklisted & no new executor can be acquired
- SPARK-22148 try to acquire a new executor when task is unschedulable with 1 executor
- SPARK-22148 abort timer should clear unschedulableTaskSetToExpiryTime for all TaskSets
- SPARK-22148 Ensure we don't abort the taskSet if we haven't been completely blacklisted
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted node for entire task set prevents per-task blacklist checks: iteration 9
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 0
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 1
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 2
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 3
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 4
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 5
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 6
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 7
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 8
- Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 9
- abort stage if executor loss results in unschedulability from previously failed tasks
- don't abort if there is an executor available, though it hasn't had scheduled tasks yet
- SPARK-16106 locality levels updated if executor added to existing host
- scheduler checks for executors that can be expired from blacklist
- if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553)
- if a task finishes with TaskState.LOST its executor is marked as dead
- Locality should be used for bulk offers even with delay scheduling off
- With delay scheduling off, tasks can be run at any locality level immediately
- TaskScheduler should throw IllegalArgumentException when schedulingMode is not supported
- don't schedule for a barrier taskSet if available slots are less than pending tasks
- schedule tasks for a barrier taskSet if all tasks can be launched together
- SPARK-29263: barrier TaskSet can't schedule when higher prio taskset takes the slots
- cancelTasks shall kill all the running tasks and fail the stage
- killAllTaskAttempts shall kill all the running tasks and not fail the stage
- mark taskset for a barrier stage as zombie in case a task fails
- Scheduler correctly accounts for GPUs per task
SparkConfSuite:
- Test byteString conversion
- Test timeString conversion
- loading from system properties
- initializing without loading defaults
- named set methods
- basic get and set
- basic getAllWithPrefix
- creating SparkContext without master and app name
- creating SparkContext without master
- creating SparkContext without app name
- creating SparkContext with both master and app name
- SparkContext property overriding
- nested property names
- Thread safeness - SPARK-5425
- register kryo classes through registerKryoClasses
- register kryo classes through registerKryoClasses and custom registrator
- register kryo classes through conf
- deprecated configs
- akka deprecated configs
- SPARK-13727
- SPARK-17240: SparkConf should be serializable (java)
- SPARK-17240: SparkConf should be serializable (kryo)
- encryption requires authentication
- spark.network.timeout should be bigger than spark.executor.heartbeatInterval
- SPARK-26998: SSL configuration not needed on executors
- SPARK-27244 toDebugString redacts sensitive information
- SPARK-28355: Use Spark conf for threshold at which UDFs are compressed by broadcast
- SPARK-24337: getSizeAsKb with default throws an useful error message with key name
- SPARK-24337: getTimeAsMs throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds throws an useful error message with key name
- SPARK-24337: getTimeAsSeconds with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default long throws an useful error message with key name
- SPARK-24337: getSizeAsMb throws an useful error message with key name
- SPARK-24337: getSizeAsGb throws an useful error message with key name
- SPARK-24337: getSizeAsBytes with default string throws an useful error message with key name
- SPARK-24337: getDouble throws an useful error message with key name
- SPARK-24337: getTimeAsMs with default throws an useful error message with key name
- SPARK-24337: getSizeAsBytes throws an useful error message with key name
- SPARK-24337: getSizeAsGb with default throws an useful error message with key name
- SPARK-24337: getInt throws an useful error message with key name
- SPARK-24337: getSizeAsMb with default throws an useful error message with key name
- SPARK-24337: getSizeAsKb throws an useful error message with key name
- SPARK-24337: getBoolean throws an useful error message with key name
- SPARK-24337: getLong throws an useful error message with key name
- get task resource requirement from config
- test task resource requirement with 0 amount
- Ensure that we can configure fractional resources for a task
- Non-task resources are never fractional
ShuffleBlockFetcherIteratorSuite:
- successful 3 local + 4 host local + 2 remote reads
- error during accessing host local dirs for executors
- fetch continuous blocks in batch successful 3 local + 4 host local + 2 remote reads
- fetch continuous blocks in batch respects maxBlocksInFlightPerAddress
- release current unexhausted buffer in case the task completes early
- fail all blocks if any of the remote request fails
- retry corrupt blocks
- big blocks are also checked for corruption
- ensure big blocks available as a concatenated stream can be read
- retry corrupt blocks (disabled)
- Blocks should be shuffled to disk when size of the request is above the threshold(maxReqSizeShuffleToMem).
- fail zero-size blocks
ConfigEntrySuite:
- conf entry: int
- conf entry: long
- conf entry: double
- conf entry: boolean
- conf entry: optional
- conf entry: fallback
- conf entry: time
- conf entry: bytes
- conf entry: regex
- conf entry: string seq
- conf entry: int seq
- conf entry: transformation
- conf entry: checkValue()
- conf entry: valid values check
- conf entry: conversion error
- default value handling is null-safe
- variable expansion of spark config entries
- conf entry : default function
- conf entry: alternative keys
- conf entry: prepend with default separator
- conf entry: prepend with custom separator
- conf entry: prepend with fallback
- conf entry: prepend should work only with string type
- onCreate
WorkerSuite:
- test isUseLocalNodeSSLConfig
- test maybeUpdateSSLSettings
- test clearing of finishedExecutors (small number of executors)
- test clearing of finishedExecutors (more executors)
- test clearing of finishedDrivers (small number of drivers)
- test clearing of finishedDrivers (more drivers)
- worker could be launched without any resources
- worker could load resources from resources file while launching
- worker could load resources from discovery script while launching
- worker could load resources from resources file and discovery script while launching
- Workers run on the same host should avoid resources conflict when coordinate is on
- Workers run on the same host should load resources naively when coordinate is off
- cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=true
- don't cleanup non-shuffle files after executor exits when config spark.storage.cleanupFilesAfterExecutorExit=false
- WorkDirCleanup cleans app dirs and shuffle metadata when spark.shuffle.service.db.enabled=true
- WorkDirCleanup cleans only app dirs when spark.shuffle.service.db.enabled=false
BlockManagerSuite:
- StorageLevel object caching
- BlockManagerId object caching
- BlockManagerId.isDriver() with DRIVER_IDENTIFIER (SPARK-27090)
- master + 1 manager interaction
- master + 2 managers interaction
- removing block
- removing rdd
- removing broadcast
- reregistration on heart beat
- reregistration on block update
- reregistration doesn't dead lock
- correct BlockResult returned from get() calls
- optimize a location order of blocks without topology information
- optimize a location order of blocks with topology information
- SPARK-9591: getRemoteBytes from another location when Exception throw
- SPARK-27622: avoid the network when block requested from same host, StorageLevel(disk, 1 replicas)
- SPARK-27622: avoid the network when block requested from same host, StorageLevel(disk, deserialized, 1 replicas)
- SPARK-27622: avoid the network when block requested from same host, StorageLevel(disk, deserialized, 2 replicas)
- SPARK-27622: as file is removed fall back to network fetch, StorageLevel(disk, 1 replicas), getRemoteValue()
- SPARK-27622: as file is removed fall back to network fetch, StorageLevel(disk, 1 replicas), getRemoteBytes()
- SPARK-27622: as file is removed fall back to network fetch, StorageLevel(disk, deserialized, 1 replicas), getRemoteValue()
- SPARK-27622: as file is removed fall back to network fetch, StorageLevel(disk, deserialized, 1 replicas), getRemoteBytes()
- SPARK-14252: getOrElseUpdate should still read from remote storage
- in-memory LRU storage
- in-memory LRU storage with serialization
- in-memory LRU storage with off-heap
- in-memory LRU for partitions of same RDD
- in-memory LRU for partitions of multiple RDDs
- on-disk storage (encryption = off)
- on-disk storage (encryption = on)
- disk and memory storage (encryption = off)
- disk and memory storage (encryption = on)
- disk and memory storage with getLocalBytes (encryption = off)
- disk and memory storage with getLocalBytes (encryption = on)
- disk and memory storage with serialization (encryption = off)
- disk and memory storage with serialization (encryption = on)
- disk and memory storage with serialization and getLocalBytes (encryption = off)
- disk and memory storage with serialization and getLocalBytes (encryption = on)
- disk and off-heap memory storage (encryption = off)
- disk and off-heap memory storage (encryption = on)
- disk and off-heap memory storage with getLocalBytes (encryption = off)
- disk and off-heap memory storage with getLocalBytes (encryption = on)
- LRU with mixed storage levels (encryption = off)
- LRU with mixed storage levels (encryption = on)
- in-memory LRU with streams (encryption = off)
- in-memory LRU with streams (encryption = on)
- LRU with mixed storage levels and streams (encryption = off)
- LRU with mixed storage levels and streams (encryption = on)
- negative byte values in ByteBufferInputStream
- overly large block
- block compression
- block store put failure
- test putBlockDataAsStream with caching (encryption = off)
- test putBlockDataAsStream with caching (encryption = on)
- test putBlockDataAsStream with caching, serialized (encryption = off)
- test putBlockDataAsStream with caching, serialized (encryption = on)
- test putBlockDataAsStream with caching on disk (encryption = off)
- test putBlockDataAsStream with caching on disk (encryption = on)
- turn off updated block statuses
- updated block statuses
- query block statuses
- get matching blocks
- SPARK-1194 regression: fix the same-RDD rule for cache replacement
- safely unroll blocks through putIterator (disk)
- read-locked blocks cannot be evicted from memory
- remove block if a read fails due to missing DiskStore files (SPARK-15736)
- SPARK-13328: refresh block locations (fetch should fail after hitting a threshold)
- SPARK-13328: refresh block locations (fetch should succeed after location refresh)
- SPARK-17484: block status is properly updated following an exception in put()
- SPARK-17484: master block locations are updated following an invalid remote block fetch
- SPARK-25888: serving of removed file not detected by shuffle service
- test sorting of block locations
- SPARK-20640: Shuffle registration timeout and maxAttempts conf are working
- fetch remote block to local disk if block size is larger than threshold
- query locations of blockIds
PythonRunnerSuite:
- format path
- format paths
SortShuffleWriterSuite:
- write empty iterator
- write with some records
CryptoStreamUtilsSuite:
- crypto configuration conversion
- shuffle encryption key length should be 128 by default
- create 256-bit key
- create key with invalid length
- serializer manager integration
- encryption key propagation to executors
- crypto stream wrappers
- error handling wrapper
StatsdSinkSuite:
- metrics StatsD sink with Counter
- metrics StatsD sink with Gauge
- metrics StatsD sink with Histogram
- metrics StatsD sink with Timer
FileCommitProtocolInstantiationSuite:
- Dynamic partitions require appropriate constructor
- Standard partitions work with classic constructor
- Three arg constructors have priority
- Three arg constructors have priority when dynamic
- The protocol must be of the correct class
- If there is no matching constructor, class hierarchy is irrelevant
CompletionIteratorSuite:
- basic test
- reference to sub iterator should not be available after completion
LauncherBackendSuite:
- local: launcher handle
- standalone/client: launcher handle
LogPageSuite:
- get logs simple
UnifiedMemoryManagerSuite:
- single task requesting on-heap execution memory
- two tasks requesting full on-heap execution memory
- two tasks cannot grow past 1 / N of on-heap execution memory
- tasks can block to get at least 1 / 2N of on-heap execution memory
- TaskMemoryManager.cleanUpAllAllocatedMemory
- tasks should not be granted a negative amount of execution memory
- off-heap execution allocations cannot exceed limit
- basic execution memory
- basic storage memory
- execution evicts storage
- execution memory requests smaller than free memory should evict storage (SPARK-12165)
- storage does not evict execution
- small heap
- insufficient executor memory
- execution can evict cached blocks when there are multiple active tasks (SPARK-12155)
- SPARK-15260: atomically resize memory pools
- not enough free memory in the storage pool --OFF_HEAP
UnsafeKryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- registration of TaskCommitMessage
- serialization buffer overflow reporting
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true, usePool = true
- instance reuse with autoReset = true, referenceTracking = true, usePool = false
- instance reuse with autoReset = false, referenceTracking = true, usePool = true
- instance reuse with autoReset = false, referenceTracking = true, usePool = false
- instance reuse with autoReset = true, referenceTracking = false, usePool = true
- instance reuse with autoReset = true, referenceTracking = false, usePool = false
- instance reuse with autoReset = false, referenceTracking = false, usePool = true
- instance reuse with autoReset = false, referenceTracking = false, usePool = false
- SPARK-25839 KryoPool implementation works correctly in multi-threaded environment
- SPARK-27216: test RoaringBitmap ser/dser with Kryo
NettyRpcAddressSuite:
- toString
- toString for client mode
BitSetSuite:
- basic set and get
- 100% full bit set
- nextSetBit
- xor len(bitsetX) < len(bitsetY)
- xor len(bitsetX) > len(bitsetY)
- andNot len(bitsetX) < len(bitsetY)
- andNot len(bitsetX) > len(bitsetY)
- [gs]etUntil
AsyncRDDActionsSuite:
- countAsync
- collectAsync
- foreachAsync
- foreachPartitionAsync
- takeAsync
- async success handling
- async failure handling
- FutureAction result, infinite wait
- FutureAction result, finite wait
- FutureAction result, timeout
- SimpleFutureAction callback must not consume a thread while waiting
- ComplexFutureAction callback must not consume a thread while waiting
StagePageSuite:
- ApiHelper.COLUMN_TO_INDEX should match headers of the task table
BarrierStageOnSubmittedSuite:
- submit a barrier ResultStage that contains PartitionPruningRDD
- submit a barrier ShuffleMapStage that contains PartitionPruningRDD
- submit a barrier stage that doesn't contain PartitionPruningRDD
- submit a barrier stage with partial partitions
- submit a barrier stage with union()
- submit a barrier stage with coalesce()
- submit a barrier stage that contains an RDD that depends on multiple barrier RDDs
- submit a barrier stage with zip()
- submit a barrier ResultStage with dynamic resource allocation enabled
- submit a barrier ShuffleMapStage with dynamic resource allocation enabled
- submit a barrier ResultStage that requires more slots than current total under local mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local mode
- submit a barrier ResultStage that requires more slots than current total under local-cluster mode
- submit a barrier ShuffleMapStage that requires more slots than current total under local-cluster mode
BlockManagerInfoSuite:
- broadcast block externalShuffleServiceEnabled=true
- broadcast block externalShuffleServiceEnabled=false
- RDD block with MEMORY_ONLY externalShuffleServiceEnabled=true
- RDD block with MEMORY_ONLY externalShuffleServiceEnabled=false
- RDD block with MEMORY_AND_DISK externalShuffleServiceEnabled=true
- RDD block with MEMORY_AND_DISK externalShuffleServiceEnabled=false
- RDD block with DISK_ONLY externalShuffleServiceEnabled=true
- RDD block with DISK_ONLY externalShuffleServiceEnabled=false
- update from MEMORY_ONLY to DISK_ONLY externalShuffleServiceEnabled=true
- update from MEMORY_ONLY to DISK_ONLY externalShuffleServiceEnabled=false
- using invalid StorageLevel externalShuffleServiceEnabled=true
- using invalid StorageLevel externalShuffleServiceEnabled=false
- remove block externalShuffleServiceEnabled=true
- remove block externalShuffleServiceEnabled=false
HistoryServerArgumentsSuite:
- No Arguments Parsing
- Properties File Arguments Parsing --properties-file
HttpSecurityFilterSuite:
- filter bad user input
- perform access control
- set security-related headers
- doAs impersonation
MetricsSystemSuite:
- MetricsSystem with default config
- MetricsSystem with sources add
- MetricsSystem with Driver instance
- MetricsSystem with Driver instance and spark.app.id is not set
- MetricsSystem with Driver instance and spark.executor.id is not set
- MetricsSystem with Executor instance
- MetricsSystem with Executor instance and spark.app.id is not set
- MetricsSystem with Executor instance and spark.executor.id is not set
- MetricsSystem with instance which is neither Driver nor Executor
- MetricsSystem with Executor instance, with custom namespace
- MetricsSystem with Executor instance, custom namespace which is not set
- MetricsSystem with Executor instance, custom namespace, spark.executor.id not set
- MetricsSystem with non-driver, non-executor instance with custom namespace
JobCancellationSuite:
- local mode, FIFO scheduler
- local mode, fair scheduler
- cluster mode, FIFO scheduler
- cluster mode, fair scheduler
- do not put partially executed partitions into cache
- job group
- inherited job group (SPARK-6629)
- job group with interruption
- task reaper kills JVM if killed tasks keep running for too long
- task reaper will not kill JVM if spark.task.killTimeout == -1
- two jobs sharing the same stage
- interruptible iterator of shuffle reader
PartitioningSuite:
- HashPartitioner equality
- RangePartitioner equality
- RangePartitioner getPartition
- RangePartitioner for keys that are not Comparable (but with Ordering)
- RangPartitioner.sketch
- RangePartitioner.determineBounds
- RangePartitioner should run only one job if data is roughly balanced
- RangePartitioner should work well on unbalanced data
- RangePartitioner should return a single partition for empty RDDs
- HashPartitioner not equal to RangePartitioner
- partitioner preservation
- partitioning Java arrays should fail
- zero-length partitions should be correctly handled
- Number of elements in RDD is less than number of partitions
- defaultPartitioner
- defaultPartitioner when defaultParallelism is set
SecurityManagerSuite:
- set security with conf
- set security with conf for groups
- set security with api
- set security with api for groups
- set security modify acls
- set security modify acls for groups
- set security admin acls
- set security admin acls for groups
- set security with * in acls
- set security with * in acls for groups
- security for groups default behavior
- missing secret authentication key
- secret authentication key
- use executor-specific secret file configuration.
- secret file must be defined in both driver and executor
- master yarn cannot use file mounted secrets
- master local cannot use file mounted secrets
- master local[*] cannot use file mounted secrets
- master local[1,2] cannot use file mounted secrets
- master mesos://localhost:8080 cannot use file mounted secrets
- secret key generation: master 'yarn'
- secret key generation: master 'local'
- secret key generation: master 'local[*]'
- secret key generation: master 'local[1, 2]'
- secret key generation: master 'k8s://127.0.0.1'
- secret key generation: master 'k8s://127.0.1.1'
- secret key generation: master 'local-cluster[2, 1, 1024]'
- secret key generation: master 'invalid'
UISuite:
- basic ui visibility !!! IGNORED !!!
- visibility at localhost:4040 !!! IGNORED !!!
- jetty selects different port under contention
- jetty with https selects different port under contention
- jetty binds to port 0 correctly
- jetty with https binds to port 0 correctly
- verify webUrl contains the scheme
- verify webUrl contains the port
- verify proxy rewrittenURI
- verify rewriting location header for reverse proxy
- add and remove handlers with custom user filter
- http -> https redirect applies to all URIs
- specify both http and https ports separately
- redirect with proxy server support
SSLOptionsSuite:
- test resolving property file as spark conf 
- test resolving property with defaults specified 
- test whether defaults can be overridden 
- variable substitution
- get password from Hadoop credential provider
SparkListenerWithClusterSuite:
- SparkListener sends executor added message
RollingEventLogFilesReaderSuite:
- Retrieve EventLogFileReader correctly
- get information, list event log files, zip log files - with codec None
- get information, list event log files, zip log files - with codec Some(lz4)
- get information, list event log files, zip log files - with codec Some(lzf)
- get information, list event log files, zip log files - with codec Some(snappy)
- get information, list event log files, zip log files - with codec Some(zstd)
- rolling event log files - codec None
- rolling event log files - codec Some(lz4)
- rolling event log files - codec Some(lzf)
- rolling event log files - codec Some(snappy)
- rolling event log files - codec Some(zstd)
ExecutorMonitorSuite:
- basic executor timeout
- SPARK-4951, SPARK-26927: handle out of order task start events
- track tasks running on executor
- use appropriate time out depending on whether blocks are stored
- keeps track of stored blocks for each rdd and split
- handle timeouts correctly with multiple executors
- SPARK-27677: don't track blocks stored on disk when using shuffle service
- track executors pending for removal
- shuffle block tracking
- SPARK-28839: Avoids NPE in context cleaner when shuffle service is on
- shuffle tracking with multiple executors and concurrent jobs
- SPARK-28455: avoid overflow in timeout calculation
InputOutputMetricsSuite:
- input metrics for old hadoop with coalesce
- input metrics with cache and coalesce
- input metrics for new Hadoop API with coalesce
- input metrics when reading text file
- input metrics on records read - simple
- input metrics on records read - more stages
- input metrics on records - New Hadoop API
- input metrics on records read with cache
- input read/write and shuffle read/write metrics all line up
- input metrics with interleaved reads
- output metrics on records written
- output metrics on records written - new Hadoop API
- output metrics when writing text file
- input metrics with old CombineFileInputFormat
- input metrics with new CombineFileInputFormat
- input metrics with old Hadoop API in different thread
- input metrics with new Hadoop API in different thread
OutputCommitCoordinatorIntegrationSuite:
- exception thrown in OutputCommitter.commitTask()
BasicEventFilterSuite:
- filter out events for finished jobs
- accept all events for block manager addition/removal on driver
- filter out events for dead executors
- other events should be left to other filters
StandaloneRestSubmitSuite:
- construct submit request
- create submission
- create submission with multiple masters
- create submission from main method
- kill submission
- request submission status
- create then kill
- create then request status
- create then kill then request status
- kill or request status before create
- good request paths
- good request paths, bad requests
- bad request paths
- server returns unknown fields
- client handles faulty server
- client does not send 'SPARK_ENV_LOADED' env var by default
- client does not send 'SPARK_HOME' env var by default
- client does not send 'SPARK_CONF_DIR' env var by default
- client includes mesos env vars
DriverLoggerSuite:
- driver logs are persisted locally and synced to dfs
OutputCommitCoordinatorSuite:
- Only one of two duplicate commit tasks should commit
- If commit fails, if task is retried it should not be locked, and will succeed.
- Job should not complete if all commits are denied
- Only authorized committer failures can clear the authorized committer lock (SPARK-6614)
- SPARK-19631: Do not allow failed attempts to be authorized for committing
- SPARK-24589: Differentiate tasks from different stage attempts
- SPARK-24589: Make sure stage state is cleaned up
SortShuffleSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- SortShuffleManager properly cleans up files for shuffles that use the serialized path
- SortShuffleManager properly cleans up files for shuffles that use the deserialized path
SumEvaluatorSuite:
- correct handling of count 1
- correct handling of count 0
- correct handling of NaN
- correct handling of > 1 values
- test count > 1
MapOutputTrackerSuite:
- master start and stop
- master register shuffle and fetch
- master register and unregister shuffle
- master register shuffle and unregister map output and fetch
- remote fetch
- remote fetch below max RPC message size
- min broadcast size exceeds max RPC message size
- getLocationsWithLargestOutputs with multiple outputs in same machine
- remote fetch using broadcast
- equally divide map statistics tasks
- zero-sized blocks should be excluded when getMapSizesByExecutorId
HadoopFSDelegationTokenProviderSuite:
- hadoopFSsToAccess should return defaultFS even if not configured
WholeTextFileInputFormatSuite:
- for small files minimum split size per node and per rack should be less than or equal to maximum split size.
BlockManagerProactiveReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
- proactive block replication - 2 replicas - 1 block manager deletions
- proactive block replication - 3 replicas - 2 block manager deletions
- proactive block replication - 4 replicas - 3 block manager deletions
- proactive block replication - 5 replicas - 4 block manager deletions
SparkListenerSuite:
- don't call sc.stop in listener
- basic creation and shutdown of LiveListenerBus
- bus.stop() waits for the event queue to completely drain
- metrics for dropped listener events
- basic creation of StageInfo
- basic creation of StageInfo with shuffle
- StageInfo with fewer tasks than partitions
- local metrics
- onTaskGettingResult() called when result fetched remotely
- onTaskGettingResult() not called when result sent directly
- onTaskEnd() should be called for all started tasks, even after job has been killed
- SparkListener moves on if a listener throws an exception
- registering listeners via spark.extraListeners
- add and remove listeners to/from LiveListenerBus queues
- interrupt within listener is handled correctly: throw interrupt
- interrupt within listener is handled correctly: set Thread interrupted
- SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: throw interrupt
- SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: set Thread interrupted
- event queue size can be configured through spark conf
VersionUtilsSuite:
- Parse Spark major version
- Parse Spark minor version
- Parse Spark major and minor versions
- Return short version number
SizeTrackerSuite:
- vector fixed size insertions
- vector variable size insertions
- map fixed size insertions
- map variable size insertions
- map updates
SortShuffleManagerSuite:
- supported shuffle dependencies for serialized shuffle
- unsupported shuffle dependencies for serialized shuffle
KryoSerializerAutoResetDisabledSuite:
- sort-shuffle with bypassMergeSort (SPARK-7873)
- calling deserialize() after deserializeStream()
- SPARK-25786: ByteBuffer.array -- UnsupportedOperationException
SparkTransportConfSuite:
- default value is get when neither role nor module is set
- module value is get when role is not set
- use correct configuration when both module and role configs are present
CompressionCodecSuite:
- default compression codec
- lz4 compression codec
- lz4 compression codec short form
- lz4 supports concatenation of serialized streams
- lzf compression codec
- lzf compression codec short form
- lzf supports concatenation of serialized streams
- snappy compression codec
- snappy compression codec short form
- snappy supports concatenation of serialized streams
- zstd compression codec
- zstd compression codec short form
- zstd supports concatenation of serialized zstd
- bad compression codec
ChunkedByteBufferFileRegionSuite:
- transferTo can stop and resume correctly
- transfer to with random limits
XORShiftRandomSuite:
- XORShift generates valid random numbers
- XORShift with zero seed
- hashSeed has random bits throughout
CoarseGrainedSchedulerBackendSuite:
- serialized task larger than max RPC message size
- compute max number of concurrent tasks can be launched
- compute max number of concurrent tasks can be launched when spark.task.cpus > 1
- compute max number of concurrent tasks can be launched when some executors are busy
- custom log url for Spark UI is applied
- extra resources from executor
AppendOnlyMapSuite:
- initialization
- object keys and values
- primitive keys and values
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- destructive sort
ConfigReaderSuite:
- variable expansion
- circular references
- spark conf provider filters config keys
ThreadUtilsSuite:
- newDaemonSingleThreadExecutor
- newDaemonSingleThreadScheduledExecutor
- newDaemonCachedThreadPool
- sameThread
- runInNewThread
Exception in thread "test-ForkJoinPool-3-worker-3" java.lang.InterruptedException: sleep interrupted
	at java.base/java.lang.Thread.sleep(Native Method)
	at org.apache.spark.util.ThreadUtilsSuite$$anon$3.$anonfun$run$1(ThreadUtilsSuite.scala:146)
	at org.apache.spark.util.ThreadUtilsSuite$$anon$3.$anonfun$run$1$adapted(ThreadUtilsSuite.scala:145)
	at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:357)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Exception in thread "test-ForkJoinPool-3-worker-1" java.lang.InterruptedException: sleep interrupted
	at java.base/java.lang.Thread.sleep(Native Method)
	at org.apache.spark.util.ThreadUtilsSuite$$anon$3.$anonfun$run$1(ThreadUtilsSuite.scala:146)
	at org.apache.spark.util.ThreadUtilsSuite$$anon$3.$anonfun$run$1$adapted(ThreadUtilsSuite.scala:145)
	at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:357)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
- parmap should be interruptible
SocketAuthHelperSuite:
- successful auth
- failed auth
RDDOperationScopeSuite:
- equals and hashCode
- getAllScopes
- json de/serialization
- withScope
- withScope with partial nesting
- withScope with multiple layers of nesting
ShuffleDriverComponentsSuite:
- test serialization of shuffle initialization conf to executors
KryoSerializerDistributedSuite:
- kryo objects are serialised consistently in different processes
OpenHashMapSuite:
- size for specialized, primitive value (int)
- initialization
- primitive value
- non-primitive value
- null keys
- null values
- changeValue
- inserting in capacity-1 map
- contains
- distinguish between the 0/0.0/0L and null
OpenHashSetSuite:
- size for specialized, primitive int
- primitive int
- primitive long
- primitive float
- primitive double
- non-primitive
- non-primitive set growth
- primitive set growth
- SPARK-18200 Support zero as an initial set size
- support for more than 12M items
AccumulatorSuite:
- accumulator serialization
- get accum
SparkContextInfoSuite:
- getPersistentRDDs only returns RDDs that are marked as cached
- getPersistentRDDs returns an immutable map
- getRDDStorageInfo only reports on RDDs that actually persist data
- call sites report correct locations
ExecutorAllocationManagerSuite:
- initialize dynamic allocation in SparkContext
- verify min/max executors
- starting state
- add executors
- executionAllocationRatio is correctly handled
- add executors capped by num pending tasks
- add executors when speculative tasks added
- properly handle task end events from completed stages
- cancel pending executors when no longer needed
- remove executors
- remove multiple executors
- Removing with various numExecutorsTarget condition
- interleaving add and remove
- starting/canceling add timer
- mock polling loop with no events
- mock polling loop add behavior
- mock polling loop remove behavior
- listeners trigger add executors correctly
- avoid ramp up when target < running executors
- avoid ramp down initial executors until first job is submitted
- avoid ramp down initial executors until idle executor is timeout
- get pending task number and related locality preference
- SPARK-8366: maxNumExecutorsNeeded should properly handle failed tasks
- reset the state of allocation manager
- SPARK-23365 Don't update target num executors when killing idle executors
- SPARK-26758 check executor target number after idle time out 
MemoryStoreSuite:
- reserve/release unroll memory
- safely unroll blocks
- safely unroll blocks through putIteratorAsValues
- safely unroll blocks through putIteratorAsBytes
- PartiallySerializedBlock.valuesIterator
- PartiallySerializedBlock.finishWritingToStream
- multiple unrolls by the same thread
- lazily create a big ByteBuffer to avoid OOM if it cannot be put into MemoryStore
- put a small ByteBuffer to MemoryStore
- SPARK-22083: Release all locks in evictBlocksToFreeSpace
ResourceProfileSuite:
- Default ResourceProfile
- Default ResourceProfile with app level resources specified
- Create ResourceProfile
- Test ExecutorResourceRequests memory helpers
- Test TaskResourceRequest fractional
SparkSubmitSuite:
- prints usage on empty input
- prints usage with only --help
- prints error with unrecognized options
- handle binary specified but not class
- handles arguments with --key=val
- handles arguments to user program
- handles arguments to user program with name collision
- print the right queue name
- SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled
- specify deploy mode through configuration
- handles YARN cluster mode
- handles YARN client mode
- handles standalone cluster mode
- handles legacy standalone cluster mode
- handles standalone client mode
- handles mesos client mode
- handles k8s cluster mode
- automatically sets mainClass if primary resource is S3 JAR in client mode
- automatically sets mainClass if primary resource is S3 JAR in cluster mode
- error informatively when mainClass isn't set and S3 JAR doesn't exist
- handles confs with flag equivalents
- SPARK-21568 ConsoleProgressBar should be enabled only in shells
- launch simple application with spark-submit
- launch simple application with spark-submit with redaction
- includes jars passed in through --jars *** FAILED ***
  The code passed to failAfter did not complete within 1 minute. (SparkSubmitSuite.scala:1433)
- includes jars passed in through --packages
- includes jars passed through spark.jars.packages and spark.jars.repositories
- correctly builds R packages included in a jar with --packages !!! IGNORED !!!
- include an external JAR in SparkR !!! CANCELED !!!
  org.apache.spark.api.r.RUtils.isSparkRInstalled was false SparkR is not installed in this build. (SparkSubmitSuite.scala:706)
- resolves command line argument paths correctly
- ambiguous archive mapping results in error message
- resolves config paths correctly
- user classpath first in driver
- SPARK_CONF_DIR overrides spark-defaults.conf
- support glob path
- SPARK-27575: yarn confs should merge new value with existing value
- downloadFile - invalid url
- downloadFile - file doesn't exist
- downloadFile does not download local file
- download one file to local
- download list of files to local
- remove copies of application jar from classpath
- Avoid re-upload remote resources in yarn client mode
- download remote resource if it is not supported by yarn service
- avoid downloading remote resource if it is supported by yarn service
- force download from blacklisted schemes
- force download for all the schemes
- start SparkApplication without modifying system properties
- support --py-files/spark.submit.pyFiles in non pyspark application
- handles natural line delimiters in --properties-file and --conf uniformly
- get a Spark configuration from arguments
RPackageUtilsSuite:
- pick which jars to unpack using the manifest
- build an R package from a jar end to end
- jars that don't exist are skipped and print warning
- faulty R package shows documentation
- jars without manifest return false
- SparkR zipping works properly
TaskDescriptionSuite:
- encoding and then decoding a TaskDescription results in the same TaskDescription
MeanEvaluatorSuite:
- test count 0
- test count 1
- test count > 1
TopologyMapperSuite:
- File based Topology Mapper
ShuffleNettySuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
CountEvaluatorSuite:
- test count 0
- test count >= 1
SingleFileEventLogFileReaderSuite:
- Retrieve EventLogFileReader correctly
- get information, list event log files, zip log files - with codec None
- get information, list event log files, zip log files - with codec Some(lz4)
- get information, list event log files, zip log files - with codec Some(lzf)
- get information, list event log files, zip log files - with codec Some(snappy)
- get information, list event log files, zip log files - with codec Some(zstd)
KryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- registration of TaskCommitMessage
- serialization buffer overflow reporting
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true, usePool = true
- instance reuse with autoReset = true, referenceTracking = true, usePool = false
- instance reuse with autoReset = false, referenceTracking = true, usePool = true
- instance reuse with autoReset = false, referenceTracking = true, usePool = false
- instance reuse with autoReset = true, referenceTracking = false, usePool = true
- instance reuse with autoReset = true, referenceTracking = false, usePool = false
- instance reuse with autoReset = false, referenceTracking = false, usePool = true
- instance reuse with autoReset = false, referenceTracking = false, usePool = false
- SPARK-25839 KryoPool implementation works correctly in multi-threaded environment
- SPARK-27216: test RoaringBitmap ser/dser with Kryo
BlacklistTrackerSuite:
- executors can be blacklisted with only a few failures per stage
- executors aren't blacklisted as a result of tasks in failed task sets
- stage blacklist updates correctly on stage success
- stage blacklist updates correctly on stage failure
- blacklisted executors and nodes get recovered with time
- blacklist can handle lost executors
- task failures expire with time
- task failure timeout works as expected for long-running tasksets
- only blacklist nodes for the application when enough executors have failed on that specific host
- blacklist still respects legacy configs
- check blacklist configuration invariants
- blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
- fetch failure blacklisting kills executors, configured by BLACKLIST_KILL_ENABLED
FailureSuite:
- failure in a single-stage job
- failure in a two-stage job
- failure in a map stage
- failure because task results are not serializable
- failure because task closure is not serializable
- managed memory leak error should not mask other failures (SPARK-9266
- last failure cause is sent back to driver
- failure cause stacktrace is sent back to driver if exception is not serializable
- failure cause stacktrace is sent back to driver if exception is not deserializable
- failure in tasks in a submitMapStage
- failure because cached RDD partitions are missing from DiskStore (SPARK-15736)
- SPARK-16304: Link error should not crash executor
PartitionwiseSampledRDDSuite:
- seed distribution
- concurrency
JdbcRDDSuite:
- basic functionality
- large id overflow
FileSuite:
- text files
- text files (compressed)
- text files do not allow null rows
- SequenceFiles
- SequenceFile (compressed)
- SequenceFile with writable key
- SequenceFile with writable value
- SequenceFile with writable key and value
- implicit conversions in reading SequenceFiles
- object files of ints
- object files of complex types
- object files of classes from a JAR
- write SequenceFile using new Hadoop API
- read SequenceFile using new Hadoop API
- binary file input as byte array
- portabledatastream caching tests
- portabledatastream persist disk storage
- portabledatastream flatmap tests
- SPARK-22357 test binaryFiles minPartitions
- minimum split size per node and per rack should be less than or equal to maxSplitSize
- fixed record length binary file as byte array
- negative binary record length should raise an exception
- file caching
- prevent user from overwriting the empty directory (old Hadoop API)
- prevent user from overwriting the non-empty directory (old Hadoop API)
- allow user to disable the output directory existence checking (old Hadoop API)
- prevent user from overwriting the empty directory (new Hadoop API)
- prevent user from overwriting the non-empty directory (new Hadoop API)
- allow user to disable the output directory existence checking (new Hadoop API
- save Hadoop Dataset through old Hadoop API
- save Hadoop Dataset through new Hadoop API
- Get input files via old Hadoop API
- Get input files via new Hadoop API
- spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD
- spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API)
- spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API)
- spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD
- SPARK-25100: Support commit tasks when Kyro registration is required
ShuffleOldFetchProtocolSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
SparkContextSuite:
- Only one SparkContext may be active at a time
- Can still construct a new SparkContext after failing to construct a previous one
- Test getOrCreate
- BytesWritable implicit conversion is correct
- basic case for addFile and listFiles
- add and list jar files
- add FS jar files not exists
- SPARK-17650: malformed url's throw exceptions before bricking Executors
- addFile recursive works
- SPARK-30126: addFile when file path contains spaces with recursive works
- SPARK-30126: addFile when file path contains spaces without recursive works
- addFile recursive can't add directories by default
- cannot call addFile with different paths that have the same filename
- addJar can be called twice with same file in local-mode (SPARK-16787)
- addFile can be called twice with same file in local-mode (SPARK-16787)
- addJar can be called twice with same file in non-local-mode (SPARK-16787)
- addFile can be called twice with same file in non-local-mode (SPARK-16787)
- SPARK-30126: add jar when path contains spaces
- add jar with invalid path
- SPARK-22585 addJar argument without scheme is interpreted literally without url decoding
- Cancelling job group should not cause SparkContext to shutdown (SPARK-6414)
- Comma separated paths for newAPIHadoopFile/wholeTextFiles/binaryFiles (SPARK-7155)
- Default path for file based RDDs is properly set (SPARK-12517)
- calling multiple sc.stop() must not throw any exception
- No exception when both num-executors and dynamic allocation set.
- localProperties are inherited by spawned threads.
- localProperties do not cross-talk between threads.
- log level case-insensitive and reset log level
- register and deregister Spark listener from SparkContext
- Cancelling stages/jobs with custom reasons.
- client mode with a k8s master url
- Killing tasks that raise interrupted exception on cancel
- Killing tasks that raise runtime exception on cancel
- SPARK-19446: DebugFilesystem.assertNoOpenStreams should report open streams to help debugging
java.lang.Throwable
	at org.apache.spark.DebugFilesystem$.addOpenStream(DebugFilesystem.scala:35)
	at org.apache.spark.DebugFilesystem.open(DebugFilesystem.scala:69)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
	at org.apache.spark.SparkContextSuite.$anonfun$new$67(SparkContextSuite.scala:683)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:56)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1349)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1343)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1343)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:1033)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:1011)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1509)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1011)
	at org.scalatest.tools.Runner$.main(Runner.scala:827)
	at org.scalatest.tools.Runner.main(Runner.scala)
- support barrier execution mode under local mode
- support barrier execution mode under local-cluster mode
- cancel zombie tasks in a result stage when the job finishes
- Avoid setting spark.task.cpus unreasonably (SPARK-27192)
- test driver discovery under local-cluster mode
- test gpu driver resource files and discovery under local-cluster mode
- Test parsing resources task configs with missing executor config
- Test parsing resources executor config < task requirements
- Parse resources executor config not the same multiple numbers of the task requirements
- Parse resources executor config cpus not limiting resource
- test resource scheduling under local-cluster mode
SourceConfigSuite:
- Test configuration for adding static sources registration
- Test configuration for skipping static sources registration
- Test configuration for adding ExecutorMetrics source registration
- Test configuration for skipping ExecutorMetrics source registration
DiskBlockObjectWriterSuite:
- verify write metrics
- verify write metrics on revert
- Reopening a closed block writer
- calling revertPartialWritesAndClose() on a partial write should truncate up to commit
- calling revertPartialWritesAndClose() after commit() should have no effect
- calling revertPartialWritesAndClose() on a closed block writer should have no effect
- commit() and close() should be idempotent
- revertPartialWritesAndClose() should be idempotent
- commit() and close() without ever opening or writing
ThreadingSuite:
- accessing SparkContext form a different thread
- accessing SparkContext form multiple threads
- accessing multi-threaded SparkContext form multiple threads
- parallel job execution
- set local properties in different thread
- set and get local properties in parent-children thread
- mutation in parent local property does not affect child (SPARK-10563)
PythonRDDSuite:
- Writing large strings to the worker
- Handle nulls gracefully
- python server error handling
- mapToConf should not load defaults
- SparkContext's hadoop configuration should be respected in PythonRDD
ShuffleDependencySuite:
- key, value, and combiner classes correct in shuffle dependency without aggregation
- key, value, and combiner classes available in shuffle dependency with aggregation
- combineByKey null combiner class tag handled correctly
ResourceInformationSuite:
- ResourceInformation.parseJson for valid JSON
- ResourceInformation.equals/hashCode
JVMObjectTrackerSuite:
- JVMObjectId does not take null IDs
- JVMObjectTracker
ClosureCleanerSuite2:
- clean basic serializable closures
- clean basic non-serializable closures
- clean basic nested serializable closures
- clean basic nested non-serializable closures
- clean complicated nested serializable closures
- clean complicated nested non-serializable closures
PartitionPruningRDDSuite:
- Pruned Partitions inherit locality prefs correctly
- Pruned Partitions can be unioned 
SimpleDateParamSuite:
- date parsing
StorageSuite:
- storage status add non-RDD blocks
- storage status add RDD blocks
- storage status getBlock
- storage status memUsed, diskUsed, externalBlockStoreUsed
- storage memUsed, diskUsed with on-heap and off-heap blocks
- old SparkListenerBlockManagerAdded event compatible
CausedBySuite:
- For an error without a cause, should return the error
- For an error with a cause, should return the cause of the error
- For an error with a cause that itself has a cause, return the root cause
JavaUtilsSuite:
- containsKey implementation without iteratively entrySet call
EventLogFileCompactorSuite:
- No event log files
- No compact file, less origin files available than max files to retain
- No compact file, more origin files available than max files to retain
- compact file exists, less origin files available than max files to retain
- compact file exists, number of origin files are same as max files to retain
- compact file exists, more origin files available than max files to retain
- events for finished job are dropped in new compact file
- Don't compact file if score is lower than threshold
- rewrite files with test filters
FileAppenderSuite:
- basic file appender
- rolling file appender - time-based rolling
- rolling file appender - time-based rolling (compressed)
- rolling file appender - size-based rolling
- rolling file appender - size-based rolling (compressed)
- rolling file appender - cleaning
- file appender selection
- file appender async close stream abruptly
- file appender async close stream gracefully
BypassMergeSortShuffleWriterSuite:
- write empty iterator
- write with some empty partitions - transferTo true
- write with some empty partitions - transferTo false
- only generate temp shuffle file for non-empty partition
- cleanup of intermediate files after errors
DistributedSuite:
- task throws not serializable exception
- local-cluster format
- simple groupByKey
- groupByKey where map output sizes exceed maxMbInFlight
- accumulators
- broadcast variables
- repeatedly failing task
- repeatedly failing task that crashes JVM
- repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925)
- caching (encryption = off)
- caching (encryption = on)
- caching on disk (encryption = off)
- caching on disk (encryption = on)
- caching in memory, replicated (encryption = off)
- caching in memory, replicated (encryption = off) (with replication as stream)
- caching in memory, replicated (encryption = on)
- caching in memory, replicated (encryption = on) (with replication as stream)
- caching in memory, serialized, replicated (encryption = off)
- caching in memory, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory, serialized, replicated (encryption = on)
- caching in memory, serialized, replicated (encryption = on) (with replication as stream)
- caching on disk, replicated (encryption = off)
- caching on disk, replicated (encryption = off) (with replication as stream)
- caching on disk, replicated (encryption = on)
- caching on disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, replicated (encryption = off)
- caching in memory and disk, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, replicated (encryption = on)
- caching in memory and disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = off)
- caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = on)
- caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream)
- compute without caching when no partitions fit in memory
- compute when only some partitions fit in memory
- passing environment variables to cluster
- recover from node failures
- recover from repeated node failures during shuffle-map
- recover from repeated node failures during shuffle-reduce
- recover from node failures with replication
- unpersist RDDs
- reference partitions inside a task
FutureActionSuite:
- simple async action
- complex async action
LocalCheckpointSuite:
- transform storage level
- basic lineage truncation
- basic lineage truncation - caching before checkpointing
- basic lineage truncation - caching after checkpointing
- indirect lineage truncation
- indirect lineage truncation - caching before checkpointing
- indirect lineage truncation - caching after checkpointing
- checkpoint without draining iterator
- checkpoint without draining iterator - caching before checkpointing
- checkpoint without draining iterator - caching after checkpointing
- checkpoint blocks exist
- checkpoint blocks exist - caching before checkpointing
- checkpoint blocks exist - caching after checkpointing
- missing checkpoint block fails with informative message
SingleEventLogFileWriterSuite:
- create EventLogFileWriter with enable/disable rolling
- initialize, write, stop - with codec None
- initialize, write, stop - with codec Some(lz4)
- initialize, write, stop - with codec Some(lzf)
- initialize, write, stop - with codec Some(snappy)
- initialize, write, stop - with codec Some(zstd)
- spark.eventLog.compression.codec overrides spark.io.compression.codec
- Log overwriting
- Event log name
WorkerWatcherSuite:
- WorkerWatcher shuts down on valid disassociation
- WorkerWatcher stays alive on invalid disassociation
ExternalShuffleServiceDbSuite:
- Recover shuffle data with spark.shuffle.service.db.enabled=true after shuffle service restart
- Can't recover shuffle data with spark.shuffle.service.db.enabled=false after shuffle service restart
CoarseGrainedExecutorBackendSuite:
- parsing no resources
- parsing one resource
- parsing multiple resources
- error checking parsing resources and executor and task configs
- executor resource found less than required
- use resource discovery
- use resource discovery and allocated file option
- track allocated resources by taskId
- SPARK-24203 when bindAddress is not set, it defaults to hostname
- SPARK-24203 when bindAddress is different, it does not default to hostname
NettyRpcEnvSuite:
- send a message locally
- send a message remotely
- send a RpcEndpointRef
- ask a message locally
- ask a message remotely
- ask a message timeout
- ask a message abort
- onStart and onStop
- onError: error in onStart
- onError: error in onStop
- onError: error in receive
- self: call in onStart
- self: call in receive
- self: call in onStop
- call receive in sequence
- stop(RpcEndpointRef) reentrant
- sendWithReply
- sendWithReply: remotely
- sendWithReply: error
- sendWithReply: remotely error
- network events in sever RpcEnv when another RpcEnv is in server mode
- network events in sever RpcEnv when another RpcEnv is in client mode
- network events in client RpcEnv when another RpcEnv is in server mode
- sendWithReply: unserializable error
- port conflict
- send with authentication
- send with SASL encryption
- send with AES encryption
- ask with authentication
- ask with SASL encryption
- ask with AES encryption
- construct RpcTimeout with conf property
- ask a message timeout on Future using RpcTimeout
- file server
- SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events
- isolated endpoints
- non-existent endpoint
- advertise address different from bind address
- RequestMessage serialization
Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:203)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:203)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
- StackOverflowError should be sent back and Dispatcher should survive
PagedTableSuite:
- pageNavigation
ClientSuite:
- correctly validates driver jar URL's
BlockIdSuite:
- test-bad-deserialization
- rdd
- shuffle
- shuffle batch
- shuffle data
- shuffle index
- broadcast
- taskresult
- stream
- temp local
- temp shuffle
- test
PartiallyUnrolledIteratorSuite:
- join two iterators
KryoSerializerResizableOutputSuite:
- kryo without resizable output buffer should fail on large array
- kryo with resizable output buffer should succeed on large array
BlockManagerReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- block replication - addition and deletion of block managers
BarrierTaskContextSuite:
- global sync by barrier() call
- support multiple barrier() call within a single task
- throw exception on barrier() call timeout
- throw exception if barrier() call doesn't happen on every task
- throw exception if the number of barrier() calls are not the same on every task
- barrier task killed, no interrupt
- barrier task killed, interrupt
BlockStoreShuffleReaderSuite:
- read() releases resources on completion
WholeTextFileRecordReaderSuite:
- Correctness of WholeTextFileRecordReader.
- Correctness of WholeTextFileRecordReader with GzipCodec.
SubmitRestProtocolSuite:
- validate
- request to and from JSON
- response to and from JSON
- CreateSubmissionRequest
- CreateSubmissionResponse
- KillSubmissionResponse
- SubmissionStatusResponse
- ErrorResponse
FlatmapIteratorSuite:
- Flatmap Iterator to Disk
- Flatmap Iterator to Memory
- Serializer Reset
SizeEstimatorSuite:
- simple classes
- primitive wrapper objects
- class field blocks rounding
- strings
- primitive arrays
- object arrays
- 32-bit arch
- 64-bit arch with no compressed oops
- class field blocks rounding on 64-bit VM without useCompressedOops
- check 64-bit detection for s390x arch
- SizeEstimation can provide the estimated size
ElementTrackingStoreSuite:
- asynchronous tracking single-fire
- tracking for multiple types
PipedRDDSuite:
- basic pipe
- basic pipe with tokenization
- failure in iterating over pipe input
- stdin writer thread should be exited when task is finished
- advanced pipe
- pipe with empty partition
- pipe with env variable
- pipe with process which cannot be launched due to bad command
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
- pipe with process which is launched but fails with non-zero exit status
- basic pipe with separate working directory
- test pipe exports map_input_file
- test pipe exports mapreduce_map_input_file
AccumulatorV2Suite:
- LongAccumulator add/avg/sum/count/isZero
- DoubleAccumulator add/avg/sum/count/isZero
- ListAccumulator
InboxSuite:
- post
- post: with reply
- post: multiple threads
- post: Associated
- post: Disassociated
- post: AssociationError
MasterWebUISuite:
- kill application
- kill driver
RadixSortSuite:
- radix support for unsigned binary data asc nulls first
- sort unsigned binary data asc nulls first
- sort key prefix unsigned binary data asc nulls first
- fuzz test unsigned binary data asc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls first with random bitmasks
- radix support for unsigned binary data asc nulls last
- sort unsigned binary data asc nulls last
- sort key prefix unsigned binary data asc nulls last
- fuzz test unsigned binary data asc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls last
- sort unsigned binary data desc nulls last
- sort key prefix unsigned binary data desc nulls last
- fuzz test unsigned binary data desc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls first
- sort unsigned binary data desc nulls first
- sort key prefix unsigned binary data desc nulls first
- fuzz test unsigned binary data desc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls first with random bitmasks
- radix support for twos complement asc nulls first
- sort twos complement asc nulls first
- sort key prefix twos complement asc nulls first
- fuzz test twos complement asc nulls first with random bitmasks
- fuzz test key prefix twos complement asc nulls first with random bitmasks
- radix support for twos complement asc nulls last
- sort twos complement asc nulls last
- sort key prefix twos complement asc nulls last
- fuzz test twos complement asc nulls last with random bitmasks
- fuzz test key prefix twos complement asc nulls last with random bitmasks
- radix support for twos complement desc nulls last
- sort twos complement desc nulls last
- sort key prefix twos complement desc nulls last
- fuzz test twos complement desc nulls last with random bitmasks
- fuzz test key prefix twos complement desc nulls last with random bitmasks
- radix support for twos complement desc nulls first
- sort twos complement desc nulls first
- sort key prefix twos complement desc nulls first
- fuzz test twos complement desc nulls first with random bitmasks
- fuzz test key prefix twos complement desc nulls first with random bitmasks
- radix support for binary data partial
- sort binary data partial
- sort key prefix binary data partial
- fuzz test binary data partial with random bitmasks
- fuzz test key prefix binary data partial with random bitmasks
DiskBlockManagerSuite:
- basic block creation
- enumerating blocks
- SPARK-22227: non-block files are skipped
- temporary shuffle/local file should be able to handle disk failures
WorkerArgumentsTest:
- Memory can't be set to 0 when cmd line args leave off M or G
- Memory can't be set to 0 when SPARK_WORKER_MEMORY env property leaves off M or G
- Memory correctly set when SPARK_WORKER_MEMORY env property appends G
- Memory correctly set from args with M appended to memory value
StatusTrackerSuite:
- basic status API usage
- getJobIdsForGroup()
- getJobIdsForGroup() with takeAsync()
- getJobIdsForGroup() with takeAsync() across multiple partitions
PrimitiveKeyOpenHashMapSuite:
- size for specialized, primitive key, value (int, int)
- initialization
- basic operations
- null values
- changeValue
- inserting in capacity-1 map
- contains
ApplicationCacheSuite:
- Completed UI get
- Test that if an attempt ID is set, it must be used in lookups
- Incomplete apps refreshed
- Large Scale Application Eviction
- Attempts are Evicted
- redirect includes query params
StandaloneDynamicAllocationSuite:
- dynamic allocation default behavior
- dynamic allocation with max cores <= cores per worker
- dynamic allocation with max cores > cores per worker
- dynamic allocation with cores per executor
- dynamic allocation with cores per executor AND max cores
- kill the same executor twice (SPARK-9795)
- the pending replacement executors should not be lost (SPARK-10515)
- disable force kill for busy executors (SPARK-9552)
- initial executor limit
- kill all executors on localhost
- executor registration on a blacklisted host must fail
ResourceUtilsSuite:
- ResourceID
- Resource discoverer no addresses errors
- Resource discoverer amount 0
- Resource discoverer multiple resource types
- get from resources file and discover the remaining
- list resource ids
- parse resource request
- Resource discoverer multiple gpus on driver
- Resource discoverer script returns mismatched name
- Resource discoverer script returns invalid format
- Resource discoverer script doesn't exist
- gpu's specified but not a discovery script
ExternalClusterManagerSuite:
- launch of backend and scheduler
LogUrlsStandaloneSuite:
- verify that correct log urls get propagated from workers
- verify that log urls reflect SPARK_PUBLIC_DNS (SPARK-6175)
AppClientSuite:
- interface methods of AppClient using local Master
- request from AppClient before initialized with master
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@13696072 rejected from java.util.concurrent.ThreadPoolExecutor@566bfc7b[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2055)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
	at java.base/java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:687)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
InternalAccumulatorSuite:
- internal accumulators in TaskContext
- internal accumulators in a stage
- internal accumulators in multiple stages
- internal accumulators in resubmitted stages
- internal accumulators are registered for cleanups
JsonProtocolSuite:
- SparkListenerEvent
- Dependent Classes
- ExceptionFailure backward compatibility: full stack trace
- StageInfo backward compatibility (details, accumulables)
- InputMetrics backward compatibility
- Input/Output records backwards compatibility
- Shuffle Read/Write records backwards compatibility
- OutputMetrics backward compatibility
- BlockManager events backward compatibility
- FetchFailed backwards compatibility
- ShuffleReadMetrics: Local bytes read backwards compatibility
- SparkListenerApplicationStart backwards compatibility
- ExecutorLostFailure backward compatibility
- SparkListenerJobStart backward compatibility
- SparkListenerJobStart and SparkListenerJobEnd backward compatibility
- RDDInfo backward compatibility (scope, parent IDs, callsite)
- StageInfo backward compatibility (parent IDs)
- TaskCommitDenied backward compatibility
- AccumulableInfo backward compatibility
- ExceptionFailure backward compatibility: accumulator updates
- ExecutorMetricsUpdate backward compatibility: executor metrics update
- executorMetricsFromJson backward compatibility: handle missing metrics
- AccumulableInfo value de/serialization
BroadcastSuite:
- Using TorrentBroadcast locally
- Accessing TorrentBroadcast variables from multiple threads
- Accessing TorrentBroadcast variables in a local cluster (encryption = off)
- Accessing TorrentBroadcast variables in a local cluster (encryption = on)
- TorrentBroadcast's blockifyObject and unblockifyObject are inverses
- Test Lazy Broadcast variables with TorrentBroadcast
- Unpersisting TorrentBroadcast on executors only in local mode
- Unpersisting TorrentBroadcast on executors and driver in local mode
- Unpersisting TorrentBroadcast on executors only in distributed mode
- Unpersisting TorrentBroadcast on executors and driver in distributed mode
- Using broadcast after destroy prints callsite
- Broadcast variables cannot be created after SparkContext is stopped (SPARK-5065)
- Forbid broadcasting RDD directly
- Cache broadcast to disk (encryption = off)
- Cache broadcast to disk (encryption = on)
- One broadcast value instance per executor
- One broadcast value instance per executor when memory is constrained
TaskSetBlacklistSuite:
- Blacklisting tasks, executors, and nodes
- multiple attempts for the same task count once
- only blacklist nodes for the task set when all the blacklisted executors are all on same host
SerializerPropertiesSuite:
- JavaSerializer does not support relocation
- KryoSerializer supports relocation when auto-reset is enabled
- KryoSerializer does not support relocation when auto-reset is disabled
EventLoopSuite:
- EventLoop
- EventLoop: start and stop
- EventLoop: onError
- EventLoop: error thrown from onError should not crash the event thread
- EventLoop: calling stop multiple times should only call onStop once
- EventLoop: post event in multiple threads
- EventLoop: onReceive swallows InterruptException
- EventLoop: stop in eventThread
- EventLoop: stop() in onStart should call onStop
- EventLoop: stop() in onReceive should call onStop
- EventLoop: stop() in onError should call onStop
ZippedPartitionsSuite:
- print sizes
DiskStoreSuite:
- reads of memory-mapped and non memory-mapped files are equivalent
- block size tracking
- blocks larger than 2gb
- block data encryption
LiveEntitySuite:
- partition seq
- Only show few elements of CollectionAccumulator when converting to v1.AccumulableInfo
DoubleRDDSuite:
- sum
- WorksOnEmpty
- WorksWithOutOfRangeWithOneBucket
- WorksInRangeWithOneBucket
- WorksInRangeWithOneBucketExactMatch
- WorksWithOutOfRangeWithTwoBuckets
- WorksWithOutOfRangeWithTwoUnEvenBuckets
- WorksInRangeWithTwoBuckets
- WorksInRangeWithTwoBucketsAndNaN
- WorksInRangeWithTwoUnevenBuckets
- WorksMixedRangeWithTwoUnevenBuckets
- WorksMixedRangeWithFourUnevenBuckets
- WorksMixedRangeWithUnevenBucketsAndNaN
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity
- WorksWithOutOfRangeWithInfiniteBuckets
- ThrowsExceptionOnInvalidBucketArray
- WorksWithoutBucketsBasic
- WorksWithoutBucketsBasicSingleElement
- WorksWithoutBucketsBasicNoRange
- WorksWithoutBucketsBasicTwo
- WorksWithDoubleValuesAtMinMax
- WorksWithoutBucketsWithMoreRequestedThanElements
- WorksWithoutBucketsForLargerDatasets
- WorksWithoutBucketsWithNonIntegralBucketEdges
- WorksWithHugeRange
- ThrowsExceptionOnInvalidRDDs
AppStatusStoreSuite:
- quantile calculation: 1 task
- quantile calculation: few tasks
- quantile calculation: more tasks
- quantile calculation: lots of tasks
- quantile calculation: custom quantiles
- quantile cache
- SPARK-26260: summary should contain only successful tasks' metrics (store = disk)
- SPARK-26260: summary should contain only successful tasks' metrics (store = in memory)
- SPARK-26260: summary should contain only successful tasks' metrics (store = in memory live)
SorterSuite:
- equivalent to Arrays.sort
- KVArraySorter
- SPARK-5984 TimSort bug
- java.lang.ArrayIndexOutOfBoundsException in TimSort
- Sorter benchmark for key-value pairs !!! IGNORED !!!
- Sorter benchmark for primitive int array !!! IGNORED !!!
MedianHeapSuite:
- If no numbers in MedianHeap, NoSuchElementException is thrown.
- Median should be correct when size of MedianHeap is even
- Median should be correct when size of MedianHeap is odd
- Median should be correct though there are duplicated numbers inside.
- Median should be correct when input data is skewed.
PoolSuite:
- FIFO Scheduler Test
- Fair Scheduler Test
- Nested Pool Test
- SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid datas
- FIFO scheduler uses root pool and not spark.scheduler.pool property
- FAIR Scheduler uses default pool when spark.scheduler.pool property is not set
- FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool
- Pool should throw IllegalArgumentException when schedulingMode is not supported
- Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set
- Fair Scheduler should use default file(fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set
- Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set
DistributionSuite:
- summary
ContextCleanerSuite:
- cleanup RDD
- cleanup shuffle
- cleanup broadcast
- automatically cleanup RDD
- automatically cleanup shuffle
- automatically cleanup broadcast
- automatically cleanup normal checkpoint
- automatically clean up local checkpoint
- automatically cleanup RDD + shuffle + broadcast
- automatically cleanup RDD + shuffle + broadcast in distributed mode
JsonProtocolSuite:
- writeApplicationInfo
- writeWorkerInfo
- writeApplicationDescription
- writeExecutorRunner
- writeDriverInfo
- writeMasterState
- writeWorkerState
HeartbeatReceiverSuite:
- task scheduler is set correctly
- normal heartbeat
- reregister if scheduler is not ready yet
- reregister if heartbeat from unregistered executor
- reregister if heartbeat from removed executor
- expire dead hosts
- expire dead hosts should kill executors with replacement (SPARK-8119)
AccumulatorSourceSuite:
- that that accumulators register against the metric system's register
- the accumulators value property is checked when the gauge's value is requested
- the double accumulators value property is checked when the gauge's value is requested
ExecutorResourceInfoSuite:
- Track Executor Resource information
- Don't allow acquire address that is not available
- Don't allow acquire address that doesn't exist
- Don't allow release address that is not assigned
- Don't allow release address that doesn't exist
- Ensure that we can acquire the same fractions of a resource from an executor
ReplayListenerSuite:
- Simple replay
- Replay compressed inprogress log file succeeding on partial read
- Replay incompatible event log
- End-to-end replay
- End-to-end replay with compression
UIUtilsSuite:
- makeDescription(plainText = false)
- makeDescription(plainText = true)
- SPARK-11906: Progress bar should not overflow because of speculative tasks
- decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.)
- listingTable with tooltips
- listingTable without tooltips
MutableURLClassLoaderSuite:
- child first
- parent first
- child first can fall back
- child first can fail
- default JDK classloader get resources
- parent first get resources
- child first get resources
- driver sets context class loader in local mode
CheckpointSuite:
- basic checkpointing [reliable checkpoint]
- basic checkpointing [local checkpoint]
- checkpointing partitioners [reliable checkpoint]
- RDDs with one-to-one dependencies [reliable checkpoint]
- RDDs with one-to-one dependencies [local checkpoint]
- ParallelCollectionRDD [reliable checkpoint]
- ParallelCollectionRDD [local checkpoint]
- BlockRDD [reliable checkpoint]
- BlockRDD [local checkpoint]
- ShuffleRDD [reliable checkpoint]
- ShuffleRDD [local checkpoint]
- UnionRDD [reliable checkpoint]
- UnionRDD [local checkpoint]
- CartesianRDD [reliable checkpoint]
- CartesianRDD [local checkpoint]
- CoalescedRDD [reliable checkpoint]
- CoalescedRDD [local checkpoint]
- CoGroupedRDD [reliable checkpoint]
- CoGroupedRDD [local checkpoint]
- ZippedPartitionsRDD [reliable checkpoint]
- ZippedPartitionsRDD [local checkpoint]
- PartitionerAwareUnionRDD [reliable checkpoint]
- PartitionerAwareUnionRDD [local checkpoint]
- CheckpointRDD with zero partitions [reliable checkpoint]
- CheckpointRDD with zero partitions [local checkpoint]
- checkpointAllMarkedAncestors [reliable checkpoint]
- checkpointAllMarkedAncestors [local checkpoint]
AppStatusUtilsSuite:
- schedulerDelay
IndexShuffleBlockResolverSuite:
- commit shuffle files multiple times
TaskResultGetterSuite:
- handling results smaller than max RPC message size
- handling results larger than max RPC message size
- handling total size of results larger than maxResultSize
- task retried if result missing from block manager
- failed task deserialized with the correct classloader (SPARK-11195)
- task result size is set on the driver, not the executors
- failed task is handled when error occurs deserializing the reason
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:305)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at java.base/java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1160)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2216)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:200)
	at jdk.internal.reflect.GeneratedMethodAccessor202.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at java.base/java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1160)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2216)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2355)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2249)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2355)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2249)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$2(TaskResultGetter.scala:141)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$1(TaskResultGetter.scala:137)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
TopologyAwareBlockReplicationPolicyBehavior:
- block replication - random block replication policy
- All peers in the same rack
- Peers in 2 racks
PersistenceEngineSuite:
- FileSystemPersistenceEngine
- ZooKeeperPersistenceEngine
MasterSuite:
- can use a custom recovery mode factory
- master correctly recover the application
- master/worker web ui available
- master/worker web ui available with reverseProxy
- basic scheduling - spread out
- basic scheduling - no spread out
- basic scheduling with more memory - spread out
- basic scheduling with more memory - no spread out
- scheduling with max cores - spread out
- scheduling with max cores - no spread out
- scheduling with cores per executor - spread out
- scheduling with cores per executor - no spread out
- scheduling with cores per executor AND max cores - spread out
- scheduling with cores per executor AND max cores - no spread out
- scheduling with executor limit - spread out
- scheduling with executor limit - no spread out
- scheduling with executor limit AND max cores - spread out
- scheduling with executor limit AND max cores - no spread out
- scheduling with executor limit AND cores per executor - spread out
- scheduling with executor limit AND cores per executor - no spread out
- scheduling with executor limit AND cores per executor AND max cores - spread out
- scheduling with executor limit AND cores per executor AND max cores - no spread out
- SPARK-13604: Master should ask Worker kill unknown executors and drivers
- SPARK-20529: Master should reply the address received from worker
- SPARK-27510: Master should avoid dead loop while launching executor failed in Worker
- SPARK-19900: there should be a corresponding driver for the app after relaunching driver
- assign/recycle resources to/from driver
- assign/recycle resources to/from executor
ExternalAppendOnlyMapSuite:
- single insert
- multiple insert
- insert with collision
- ordering
- null keys and values
- simple aggregator
- simple cogroup
- spilling
- spilling with compression
- spilling with compression and encryption
- ExternalAppendOnlyMap shouldn't fail when forced to spill before calling its iterator
- spilling with hash collisions
- spilling with many hash collisions
- spilling with hash collisions using the Int.MaxValue key
- spilling with null keys and values
- SPARK-22713 spill during iteration leaks internal map
- drop all references to the underlying map once the iterator is exhausted
- SPARK-22713 external aggregation updates peak execution memory
- force to spill for external aggregation
AdaptiveSchedulingSuite:
- simple use of submitMapStage
- fetching multiple map output partitions per reduce
- fetching all map output partitions in one reduce
- more reduce tasks than map output partitions
GenericAvroSerializerSuite:
- schema compression and decompression
- record serialization and deserialization
- uses schema fingerprint to decrease message size
- caches previously seen schemas
BlacklistIntegrationSuite:
- If preferred node is bad, without blacklist job will fail
- With default settings, job can succeed despite multiple bad executors on node
- Bad node with multiple executors, job will still succeed with the right confs
- SPARK-15865 Progress with fewer executors than maxTaskFailures
AppStatusListenerSuite:
- environment info
- scheduler events
- storage events
- eviction of old data
- eviction should respect job completion time
- eviction should respect stage completion time
- skipped stages should be evicted before completed stages
- eviction should respect task completion time
- lastStageAttempt should fail when the stage doesn't exist
- SPARK-24415: update metrics for tasks that finish late
- Total tasks in the executor summary should match total stage tasks (live = true)
- Total tasks in the executor summary should match total stage tasks (live = false)
- driver logs
- executor metrics updates
- stage executor metrics
- storage information on executor lost/down
BoundedPriorityQueueSuite:
- BoundedPriorityQueue poll test
ProactiveClosureSerializationSuite:
- throws expected serialization exceptions on actions
- mapPartitions transformations throw proactive serialization exceptions
- map transformations throw proactive serialization exceptions
- filter transformations throw proactive serialization exceptions
- flatMap transformations throw proactive serialization exceptions
- mapPartitionsWithIndex transformations throw proactive serialization exceptions
Run completed in 26 minutes, 39 seconds.
Total number of tests run: 2492
Suites: completed 247, aborted 0
Tests: succeeded 2491, failed 1, canceled 1, ignored 7, pending 0
*** 1 TEST FAILED ***
[INFO] 
[INFO] --------------< org.apache.spark:spark-mllib-local_2.12 >---------------
[INFO] Building Spark Project ML Local Library 3.0.0-SNAPSHOT           [10/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-mllib-local_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-mllib-local_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/typelevel/spire-macros_2.12/0.17.0-M1/spire-macros_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/pl/edu/icm/JLargeArrays/1.5/JLargeArrays-1.5.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.1.1/scala-collection-compat_2.12-2.1.1.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.12/1.0/breeze-macros_2.12-1.0.jar:/home/jenkins/.m2/repository/com/github/wendykierp/JTransforms/3.1/JTransforms-3.1.jar:/home/jenkins/.m2/repository/org/typelevel/spire-platform_2.12/0.17.0-M1/spire-platform_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.12/1.1.1/macro-compat_2.12-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.12/2.3.3/shapeless_2.12-2.3.3.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.12/0.6.8/machinist_2.12-0.6.8.jar:/home/jenkins/.m2/repository/org/typelevel/spire-util_2.12/0.17.0-M1/spire-util_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze_2.12/1.0/breeze_2.12-1.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/typelevel/spire_2.12/0.17.0-M1/spire_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/typelevel/algebra_2.12/2.0.0-M2/algebra_2.12-2.0.0-M2.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar:/home/jenkins/.m2/repository/org/typelevel/cats-kernel_2.12/2.0.0-M4/cats-kernel_2.12-2.0.0-M4.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-mllib-local_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-mllib-local_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-mllib-local_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.10__55.0-1.3.1_20191012T045515.jar
[INFO] Compiling 5 Scala sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/target/scala-2.12/classes ...
[INFO] Done compiling.
[INFO] compile in 14.9 s
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-mllib-local_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-mllib-local_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-mllib-local_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-mllib-local_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze-macros_2.12/1.0/breeze-macros_2.12-1.0.jar:/home/jenkins/.m2/repository/com/github/wendykierp/JTransforms/3.1/JTransforms-3.1.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.9.10/byte-buddy-agent-1.9.10.jar:/home/jenkins/.m2/repository/net/sf/opencsv/opencsv/2.3/opencsv-2.3.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.8/scalactic_2.12-3.0.8.jar:/home/jenkins/.m2/repository/org/scalanlp/breeze_2.12/1.0/breeze_2.12-1.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/typelevel/spire_2.12/0.17.0-M1/spire_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.6/objenesis-2.6.jar:/home/jenkins/.m2/repository/org/typelevel/algebra_2.12/2.0.0-M2/algebra_2.12-2.0.0-M2.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.12/1.14.2/scalacheck_2.12-1.14.2.jar:/home/jenkins/.m2/repository/com/github/fommil/netlib/core/1.1.2/core-1.1.2.jar:/home/jenkins/.m2/repository/org/typelevel/spire-macros_2.12/0.17.0-M1/spire-macros_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/pl/edu/icm/JLargeArrays/1.5/JLargeArrays-1.5.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.1.1/scala-collection-compat_2.12-2.1.1.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/typelevel/spire-platform_2.12/0.17.0-M1/spire-platform_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/typelevel/macro-compat_2.12/1.1.1/macro-compat_2.12-1.1.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.8/scalatest_2.12-3.0.8.jar:/home/jenkins/.m2/repository/com/chuusai/shapeless_2.12/2.3.3/shapeless_2.12-2.3.3.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/org/typelevel/machinist_2.12/0.6.8/machinist_2.12-0.6.8.jar:/home/jenkins/.m2/repository/org/typelevel/spire-util_2.12/0.17.0-M1/spire-util_2.12-0.17.0-M1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.1.0/mockito-core-3.1.0.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.9.10/byte-buddy-1.9.10.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/tags/target/scala-2.12/test-classes:/home/jenkins/.m2/repository/org/typelevel/cats-kernel_2.12/2.0.0-M4/cats-kernel_2.12-2.0.0-M4.jar
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-mllib-local_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.10__55.0-1.3.1_20191012T045515.jar
[INFO] Compiling 10 Scala sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/mllib-local/target/scala-2.12/test-classes ...
[INFO] Done compiling.
[INFO] compile in 21.2 s
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (default-test) @ spark-mllib-local_2.12 ---
[INFO] 
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (test) @ spark-mllib-local_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-mllib-local_2.12 ---
Discovery starting.
Discovery completed in 305 milliseconds.
Run starting. Expected test count is: 94
BLASSuite:
- copy
Jan 11, 2020 9:00:24 PM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Jan 11, 2020 9:00:25 PM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
- scal
- axpy
- dot
- spr
- syr
- gemm
- gemv
- spmv
UtilsSuite:
- EPSILON
TestingUtilsSuite:
- Comparing doubles using relative error.
- Comparing doubles using absolute error.
- Comparing vectors using relative error.
- Comparing vectors using absolute error.
- Comparing Matrices using absolute error.
- Comparing Matrices using relative error.
BreezeMatrixConversionSuite:
- dense matrix to breeze
- dense breeze matrix to matrix
- sparse matrix to breeze
- sparse breeze matrix to sparse matrix
BreezeVectorConversionSuite:
Jan 11, 2020 9:00:26 PM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Jan 11, 2020 9:00:26 PM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
- dense to breeze
- sparse to breeze
- dense breeze to vector
- sparse breeze to vector
- sparse breeze with partially-used arrays to vector
MultivariateGaussianSuite:
- univariate
- multivariate
- multivariate degenerate
- SPARK-11302
MatricesSuite:
- dense matrix construction
- dense matrix construction with wrong dimension
- sparse matrix construction
- sparse matrix construction with wrong number of elements
- index in matrices incorrect input
- equals
- matrix copies are deep copies
- matrix indexing and updating
- dense to dense
- dense to sparse
- sparse to sparse
- sparse to dense
- compressed dense
- compressed sparse
- map, update
- transpose
- foreachActive
- horzcat, vertcat, eye, speye
- zeros
- ones
- eye
- rand
- randn
- diag
- sprand
- sprandn
- toString
- numNonzeros and numActives
- fromBreeze with sparse matrix
- row/col iterator
VectorsSuite:
- dense vector construction with varargs
- dense vector construction from a double array
- sparse vector construction
- sparse vector construction with unordered elements
- sparse vector construction with mismatched indices/values array
- sparse vector construction with too many indices vs size
- sparse vector construction with negative indices
- dense to array
- dense argmax
- sparse to array
- sparse argmax
- vector equals
- vectors equals with explicit 0
- indexing dense vectors
- indexing sparse vectors
- zeros
- Vector.copy
- fromBreeze
- sqdist
- foreach
- foreachActive
- foreachNonZero
- vector p-norm
- Vector numActive and numNonzeros
- Vector toSparse and toDense
- Vector.compressed
- SparseVector.slice
- sparse vector only support non-negative length
- dot product only supports vectors of same size
- dense vector dot product
- sparse vector dot product
- mixed sparse and dense vector dot product
- iterator
- activeIterator
- nonZeroIterator
Run completed in 2 seconds, 844 milliseconds.
Total number of tests run: 94
Suites: completed 9, aborted 0
Tests: succeeded 94, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project GraphX
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Catalyst
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project SQL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project ML Library
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] -----------------< org.apache.spark:spark-tools_2.12 >------------------
[INFO] Building Spark Project Tools 3.0.0-SNAPSHOT                      [11/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-tools_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/tools/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/tools/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/7.1/asm-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/7.1/asm-tree-7.1.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.9.3/grizzled-scala_2.12-4.9.3.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.10/scala-compiler-2.12.10.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.5.1/classutil_2.12-1.5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-analysis/7.1/asm-analysis-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/7.1/asm-commons-7.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.0.0/scala-collection-compat_2.12-2.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/7.1/asm-util-7.1.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/tools/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-tools_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-tools_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.10__55.0-1.3.1_20191012T045515.jar
[INFO] Compiling 1 Scala source to /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/tools/target/scala-2.12/classes ...
[INFO] Done compiling.
[INFO] compile in 5.4 s
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-tools_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/tools/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-tools_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/7.1/asm-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/7.1/asm-tree-7.1.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.9.3/grizzled-scala_2.12-4.9.3.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.10/scala-compiler-2.12.10.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.8/scalatest_2.12-3.0.8.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.5.1/classutil_2.12-1.5.1.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.8/scalactic_2.12-3.0.8.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-analysis/7.1/asm-analysis-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/7.1/asm-commons-7.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.0.0/scala-collection-compat_2.12-2.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/7.1/asm-util-7.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-tools_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (default-test) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (test) @ spark-tools_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-tools_2.12 ---
Discovery starting.
Discovery completed in 676 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 756 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project REPL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --------------< org.apache.spark:spark-network-yarn_2.12 >--------------
[INFO] Building Spark Project YARN Shuffle Service 3.0.0-SNAPSHOT       [12/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:regex-property (regex-property) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-network-yarn_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-yarn/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-yarn/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.9/commons-lang3-3.9.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.1/metrics-core-4.1.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.10.0/jackson-annotations-2.10.0.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.42.Final/netty-all-4.1.42.Final.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.0/jackson-databind-2.10.0.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.10.0/jackson-core-2.10.0.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-yarn/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-network-yarn_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.10__55.0-1.3.1_20191012T045515.jar
[INFO] Compiling 3 Java sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-yarn/target/scala-2.12/classes ...
[INFO] Done compiling.
[INFO] compile in 0.5 s
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-network-yarn_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-yarn/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.18.v20190429/jetty-servlet-9.4.18.v20190429.jar:/home/jenkins/.m2/repository/dnsjava/dnsjava/2.1.7/dnsjava-2.1.7.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-simplekdc/1.0.1/kerb-simplekdc-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-configuration2/2.1.1/commons-configuration2-2.1.1.jar:/home/jenkins/.m2/repository/net/minidev/json-smart/2.3/json-smart-2.3.jar:/home/jenkins/.m2/repository/org/apache/kerby/token-provider/1.0.1/token-provider-1.0.1.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/apache/curator/curator-framework/2.13.0/curator-framework-2.13.0.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.1/metrics-core-4.1.1.jar:/home/jenkins/.m2/repository/com/squareup/okio/okio/1.6.0/okio-1.6.0.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.10.0/jackson-annotations-2.10.0.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.0.8/scalactic_2.12-3.0.8.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-core/1.0.1/kerb-core-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.8.2/avro-1.8.2.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/jenkins/.m2/repository/net/minidev/accessors-smart/1.2/accessors-smart-1.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/jaxrs/jackson-jaxrs-json-provider/2.9.5/jackson-jaxrs-json-provider-2.9.5.jar:/home/jenkins/.m2/repository/com/google/re2j/re2j/1.1/re2j-1.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/3.2.0/hadoop-mapreduce-client-core-3.2.0.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.3/snappy-java-1.1.7.3.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.16/slf4j-api-1.7.16.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-text/1.6/commons-text-1.6.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.10.0/jackson-core-2.10.0.jar:/home/jenkins/.m2/repository/org/tukaani/xz/1.5/xz-1.5.jar:/home/jenkins/.m2/repository/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerby-util/1.0.1/kerby-util-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.0.0/commons-crypto-1.0.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-annotations/3.2.0/hadoop-annotations-3.2.0.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-util/1.0.1/kerb-util-1.0.1.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/module/jackson-module-jaxb-annotations/2.10.0/jackson-module-jaxb-annotations-2.10.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.2.0/hadoop-mapreduce-client-jobclient-3.2.0.jar:/home/jenkins/.m2/repository/javax/xml/bind/jaxb-api/2.2.11/jaxb-api-2.2.11.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.0.8/scalatest_2.12-3.0.8.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.18.v20190429/jetty-util-9.4.18.v20190429.jar:/home/jenkins/.m2/repository/jakarta/activation/jakarta.activation-api/1.2.1/jakarta.activation-api-1.2.1.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-recipes/2.13.0/curator-recipes-2.13.0.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerby-xdr/1.0.1/kerby-xdr-1.0.1.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.10.0/jackson-databind-2.10.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/3.2.0/hadoop-mapreduce-client-common-3.2.0.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-client/2.13.0/curator-client-2.13.0.jar:/home/jenkins/.m2/repository/com/github/stephenc/jcip/jcip-annotations/1.0-1/jcip-annotations-1.0-1.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-admin/1.0.1/kerb-admin-1.0.1.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.9/commons-lang3-3.9.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-client/3.2.0/hadoop-hdfs-client-3.2.0.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerby-config/1.0.1/kerby-config-1.0.1.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/apache/kerby/kerby-pkix/1.0.1/kerby-pkix-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-api/3.2.0/hadoop-yarn-api-3.2.0.jar:/home/jenkins/.m2/repository/com/nimbusds/nimbus-jose-jwt/4.41.1/nimbus-jose-jwt-4.41.1.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/jenkins/.m2/repository/jakarta/xml/bind/jakarta.xml.bind-api/2.3.2/jakarta.xml.bind-api-2.3.2.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.10/commons-codec-1.10.jar:/home/jenkins/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.12/httpcore-4.4.12.jar:/home/jenkins/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.2.0/hadoop-yarn-common-3.2.0.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils/1.9.4/commons-beanutils-1.9.4.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerby-asn1/1.0.1/kerby-asn1-1.0.1.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-server/1.0.1/kerb-server-1.0.1.jar:/home/jenkins/.m2/repository/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/org/codehaus/woodstox/stax2-api/3.1.4/stax2-api-3.1.4.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/jenkins/.m2/repository/junit/junit/4.12/junit-4.12.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-security/9.4.18.v20190429/jetty-security-9.4.18.v20190429.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-client/1.0.1/kerb-client-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-client/3.2.0/hadoop-yarn-client-3.2.0.jar:/home/jenkins/.m2/repository/com/squareup/okhttp/okhttp/2.7.5/okhttp-2.7.5.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.6/httpclient-4.5.6.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-identity/1.0.1/kerb-identity-1.0.1.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.42.Final/netty-all-4.1.42.Final.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.2.0/hadoop-common-3.2.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-3.2-jdk-11/common/tags/target/scala-2.12/test-classes:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-auth/3.2.0/hadoop-auth-3.2.0.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-common/1.0.1/kerb-common-1.0.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client/3.2.0/hadoop-client-3.2.0.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/jaxrs/jackson-jaxrs-base/2.9.5/jackson-jaxrs-base-2.9.5.jar:/home/jenkins/.m2/repository/org/apache/kerby/kerb-crypto/1.0.1/kerb-crypto-1.0.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-network-yarn_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (default-test) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (test) @ spark-network-yarn_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-network-yarn_2.12 ---
Discovery starting.
Discovery completed in 102 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 155 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project YARN
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Mesos
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive Thrift Server
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Token Provider for Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Source for Structured Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Kinesis Integration
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Examples
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10 Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Avro
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Kinesis Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  2.816 s]
[INFO] Spark Project Tags ................................. SUCCESS [  8.845 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 24.181 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  4.767 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 54.580 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 10.416 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 10.897 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  4.904 s]
[INFO] Spark Project Core ................................. FAILURE [29:29 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 42.023 s]
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [  7.931 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  3.891 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Mesos ................................ SKIPPED
[INFO] Spark Project Hive Thrift Server ................... SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SKIPPED
[INFO] Spark Kinesis Integration .......................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] Spark Avro ......................................... SKIPPED
[INFO] Spark Project Kinesis Assembly ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  32:26 min
[INFO] Finished at: 2020-01-11T21:00:39-08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:2.0.0:test (test) on project spark-core_2.12: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :spark-core_2.12
+ retcode2=1
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ echo 'Testing Spark with Maven failed'
Testing Spark with Maven failed
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE