[...truncated 4,467 KB...]
[info] - Process succeeds instantly (81 milliseconds)
[info] - Process failing several times and then succeeding (71 milliseconds)
[info] - Process doesn't restart if not supervised (91 milliseconds)
[info] - Process doesn't restart if killed (78 milliseconds)
[info] - Reset of backoff counter (73 milliseconds)
[info] - Kill process finalized with state KILLED (74 milliseconds)
[info] - Finalized with state FINISHED (75 milliseconds)
[info] - Finalized with state FAILED (87 milliseconds)
[info] - Handle exception starting process (99 milliseconds)
[info] ExternalAppendOnlyMapSuite:
[info] - single insert (280 milliseconds)
[info] - multiple insert (169 milliseconds)
[info] - insert with collision (137 milliseconds)
[info] - ordering (327 milliseconds)
[info] - null keys and values (242 milliseconds)
[info] - simple aggregator (298 milliseconds)
[info] - simple cogroup (182 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (7 seconds, 848 milliseconds)
[info] - Star PageRank (3 seconds, 142 milliseconds)
[info] - Star PersonalPageRank (4 seconds, 890 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream) (5 seconds, 427 milliseconds)
[info] - spilling (6 seconds, 297 milliseconds)
[info] - run Spark in yarn-client mode with different configurations, ensuring redaction (32 seconds, 35 milliseconds)
[info] - compute without caching when no partitions fit in memory (4 seconds, 542 milliseconds)
[info] - compute when only some partitions fit in memory (5 seconds, 401 milliseconds)
[info] - Grid PageRank (12 seconds, 727 milliseconds)
[info] - passing environment variables to cluster (3 seconds, 538 milliseconds)
[info] - Chain PageRank (4 seconds, 166 milliseconds)
[info] - Chain PersonalizedPageRank (4 seconds, 960 milliseconds)
[info] - spilling with compression (25 seconds, 895 milliseconds)
[info] - recover from node failures (13 seconds, 804 milliseconds)
[info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction (28 seconds, 35 milliseconds)
[info] - spilling with compression and encryption (5 seconds, 657 milliseconds)
[info] - ExternalAppendOnlyMap shouldn't fail when forced to spill before calling its iterator (318 milliseconds)
[info] - spilling with hash collisions (375 milliseconds)
[info] - spilling with many hash collisions (1 second, 36 milliseconds)
[info] - spilling with hash collisions using the Int.MaxValue key (310 milliseconds)
[info] - spilling with null keys and values (465 milliseconds)
[info] - SPARK-22713 spill during iteration leaks internal map (860 milliseconds)
[info] - drop all references to the underlying map once the iterator is exhausted (1 second, 149 milliseconds)
[info] - SPARK-22713 external aggregation updates peak execution memory (560 milliseconds)
[info] - recover from repeated node failures during shuffle-map (9 seconds, 952 milliseconds)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3b56eeb8 rejected from java.util.concurrent.ThreadPoolExecutor@2f2482f3[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - Loop with source PageRank (20 seconds, 374 milliseconds)
[info] - force to spill for external aggregation (17 seconds, 997 milliseconds)
[info] BlockInfoManagerSuite:
[info] - initial memory usage (1 millisecond)
[info] - get non-existent block (0 milliseconds)
[info] - basic lockNewBlockForWriting (3 milliseconds)
[info] - lockNewBlockForWriting blocks while write lock is held, then returns false after release (304 milliseconds)
[info] - lockNewBlockForWriting blocks while write lock is held, then returns true after removal (305 milliseconds)
[info] - read locks are reentrant (1 millisecond)
[info] - multiple tasks can hold read locks (3 milliseconds)
[info] - single task can hold write lock (2 milliseconds)
[info] - cannot grab a writer lock while already holding a write lock (1 millisecond)
[info] - assertBlockIsLockedForWriting throws exception if block is not locked (1 millisecond)
[info] - downgrade lock (2 milliseconds)
[info] - write lock will block readers (303 milliseconds)
[info] - read locks will block writer (304 milliseconds)
[info] - removing a non-existent block throws IllegalArgumentException (2 milliseconds)
[info] - removing a block without holding any locks throws IllegalStateException (2 milliseconds)
[info] - removing a block while holding only a read lock throws IllegalStateException (9 milliseconds)
[info] - removing a block causes blocked callers to receive None (303 milliseconds)
[info] - releaseAllLocksForTask releases write locks (2 milliseconds)
[info] UISeleniumSuite:
[info] - Loop with sink PageRank (20 seconds, 117 milliseconds)
[info] TriangleCountSuite:
[info] - Count a single triangle (1 second, 34 milliseconds)
[info] - effects of unpersist() / persist() should be reflected (4 seconds, 492 milliseconds)
[info] - Count two triangles (776 milliseconds)
[info] - Count two triangles with bi-directed edges (932 milliseconds)
[info] - Count a single triangle with duplicate edges (712 milliseconds)
[info] - failed stages should not appear to be active (2 seconds, 187 milliseconds)
[info] StronglyConnectedComponentsSuite:
[info] - spark.ui.killEnabled should properly control kill button display (1 second, 172 milliseconds)
[info] - Island Strongly Connected Components (1 second, 265 milliseconds)
[info] - jobs page should not display job group name unless some job was submitted in a job group (930 milliseconds)
[info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (38 seconds, 298 milliseconds)
[info] - job progress bars should handle stage / task failures (1 second, 100 milliseconds)
[info] - recover from repeated node failures during shuffle-reduce (31 seconds, 296 milliseconds)
[info] - job details page should display useful information for stages that haven't started (543 milliseconds)
[info] - Cycle Strongly Connected Components (3 seconds, 310 milliseconds)
[info] - job progress bars / cells reflect skipped stages / tasks (957 milliseconds)
[info] - stages that aren't run appear as 'skipped stages' after a job finishes (1 second, 258 milliseconds)
[info] - jobs with stages that are skipped should show correct link descriptions on all jobs page (677 milliseconds)
[info] - attaching and detaching a new tab (603 milliseconds)
[info] - kill stage POST/GET response is correct (306 milliseconds)
[info] - 2 Cycle Strongly Connected Components (3 seconds, 106 milliseconds)
[info] VertexRDDSuite:
[info] - kill job POST/GET response is correct (250 milliseconds)
[info] - filter (502 milliseconds)
[info] - mapValues (604 milliseconds)
[info] - minus (388 milliseconds)
[info] - stage & job retention (1 second, 684 milliseconds)
[info] - minus with RDD[(VertexId, VD)] (387 milliseconds)
[info] - live UI json application list (726 milliseconds)
[info] - minus with non-equal number of partitions (903 milliseconds)
[info] - job stages should have expected dotfile under DAG visualization (528 milliseconds)
[info] - diff (666 milliseconds)
[info] - diff with RDD[(VertexId, VD)] (2 seconds, 526 milliseconds)
[info] - diff vertices with non-equal number of partitions (680 milliseconds)
[info] - recover from node failures with replication (11 seconds, 84 milliseconds)
[info] - leftJoin (600 milliseconds)
[info] - stages page should show skipped stages (4 seconds, 471 milliseconds)
[info] - leftJoin vertices with non-equal number of partitions (424 milliseconds)
[info] - Staleness of Spark UI should not last minutes or hours (493 milliseconds)
[info] LocalCheckpointSuite:
[info] - transform storage level (1 millisecond)
[info] - basic lineage truncation (47 milliseconds)
[info] - innerJoin (540 milliseconds)
[info] - basic lineage truncation - caching before checkpointing (41 milliseconds)
[info] - basic lineage truncation - caching after checkpointing (42 milliseconds)
[info] - indirect lineage truncation (55 milliseconds)
[info] - innerJoin vertices with the non-equal number of partitions (340 milliseconds)
[info] - indirect lineage truncation - caching before checkpointing (51 milliseconds)
[info] - indirect lineage truncation - caching after checkpointing (54 milliseconds)
[info] - aggregateUsingIndex (501 milliseconds)
[info] - mergeFunc (209 milliseconds)
[info] - cache, getStorageLevel (325 milliseconds)
[info] - checkpoint (866 milliseconds)
[info] - count (414 milliseconds)
[info] SVDPlusPlusSuite:
[info] - Test SVD++ with mean square error on training set (1 second, 349 milliseconds)
[info] - Test SVD++ with no edges (269 milliseconds)
[info] - unpersist RDDs (5 seconds, 311 milliseconds)
[info] GraphSuite:
[info] - Graph.fromEdgeTuples (273 milliseconds)
[info] - Graph.fromEdges (165 milliseconds)
[info] - Graph.apply (304 milliseconds)
[info] - triplets (419 milliseconds)
[info] - reference partitions inside a task (3 seconds, 681 milliseconds)
[info] ExecutorPodsSnapshotsStoreSuite:
[info] - Subscribers get notified of events periodically. (297 milliseconds)
[info] - Even without sending events, initially receive an empty buffer. (5 milliseconds)
[info] - Replacing the snapshot passes the new snapshot to subscribers. (4 milliseconds)
[info] KubernetesExecutorBuilderSuite:
[info] - Basic steps are consistently applied. (48 milliseconds)
[info] - Apply secrets step if secrets are present. (3 milliseconds)
[info] - Apply volumes step if mounts are present. (4 milliseconds)
[info] RDriverFeatureStepSuite:
[info] - R Step modifies container correctly (106 milliseconds)
[info] ExecutorPodsWatchSnapshotSourceSuite:
[info] - Watch events should be pushed to the snapshots store as snapshot updates. (159 milliseconds)
[info] JavaDriverFeatureStepSuite:
[info] - Java Step modifies container correctly (9 milliseconds)
[info] PythonDriverFeatureStepSuite:
[info] - Python Step modifies container correctly (7 milliseconds)
[info] - Python Step testing empty pyfiles (5 milliseconds)
[info] BasicExecutorFeatureStepSuite:
[info] - basic executor pod has reasonable defaults (53 milliseconds)
[info] - executor pod hostnames get truncated to 63 characters (5 milliseconds)
[info] - classpath and extra java options get translated into environment variables (6 milliseconds)
[info] - test executor pyspark memory (8 milliseconds)
[info] ExecutorPodsAllocatorSuite:
[info] - Initially request executors in batches. Do not request another batch if the first has not finished. (43 milliseconds)
[info] - Request executors in batches. Allow another batch to be requested if all pending executors start running. (28 milliseconds)
[info] - When a current batch reaches error states immediately, re-request them on the next batch. (37 milliseconds)
[info] - When an executor is requested but the API does not report it in a reasonable time, retry requesting that executor. (10 milliseconds)
[info] LocalDirsFeatureStepSuite:
[info] - Resolve to default local dir if neither env nor configuration are set (81 milliseconds)
[info] - Use configured local dirs split on comma if provided. (2 milliseconds)
[info] MountSecretsFeatureStepSuite:
[info] - mounts all given secrets (10 milliseconds)
[info] KubernetesDriverBuilderSuite:
[info] - Apply fundamental steps all the time. (18 milliseconds)
[info] - Apply secrets step if secrets are present. (5 milliseconds)
[info] - Apply Java step if main resource is none. (7 milliseconds)
[info] - Apply Python step if main resource is python. (4 milliseconds)
[info] - Apply volumes step if mounts are present. (5 milliseconds)
[info] - Apply R step if main resource is R. (5 milliseconds)
[info] KubernetesVolumeUtilsSuite:
[info] - Parses hostPath volumes correctly (8 milliseconds)
[info] - Parses persistentVolumeClaim volumes correctly (5 milliseconds)
[info] - Parses emptyDir volumes correctly (3 milliseconds)
[info] - Parses emptyDir volume options can be optional (2 milliseconds)
[info] - Defaults optional readOnly to false (2 milliseconds)
[info] - Gracefully fails on missing mount key (3 milliseconds)
[info] - Gracefully fails on missing option key (3 milliseconds)
[info] BasicDriverFeatureStepSuite:
[info] - Check the pod respects all configurations from the user. (18 milliseconds)
[info] - Check appropriate entrypoint rerouting for various bindings (5 milliseconds)
[info] - Additional system properties resolve jars and set cluster-mode confs. (8 milliseconds)
[info] ExecutorPodsSnapshotSuite:
[info] - States are interpreted correctly from pod metadata. (7 milliseconds)
[info] - Updates add new pods for non-matching ids and edit existing pods for matching ids (3 milliseconds)
[info] MountVolumesFeatureStepSuite:
[info] - Mounts hostPath volumes (11 milliseconds)
[info] - Mounts pesistentVolumeClaims (5 milliseconds)
[info] - Mounts emptyDir (2 milliseconds)
[info] - Mounts emptyDir with no options (6 milliseconds)
[info] - Mounts multiple volumes (3 milliseconds)
[info] DriverServiceFeatureStepSuite:
[info] - run Spark in yarn-client mode with additional jar (25 seconds, 42 milliseconds)
[info] - Headless service has a port for the driver RPC and the block manager. (25 milliseconds)
[info] - Hostname and ports are set according to the service name. (2 milliseconds)
[info] - Ports should resolve to defaults in SparkConf and in the service. (2 milliseconds)
[info] - Long prefixes should switch to using a generated name. (3 milliseconds)
[info] - Disallow bind address and driver host to be set explicitly. (2 milliseconds)
[info] KubernetesConfSuite:
[info] - Basic driver translated fields. (6 milliseconds)
[info] - Creating driver conf with and without the main app jar influences spark.jars (5 milliseconds)
[info] - Creating driver conf with a python primary file (3 milliseconds)
[info] - Creating driver conf with a r primary file (2 milliseconds)
[info] - Testing explicit setting of memory overhead on non-JVM tasks (2 milliseconds)
[info] - Resolve driver labels, annotations, secret mount paths, envs, and memory overhead (4 milliseconds)
[info] - Basic executor translated fields. (2 milliseconds)
[info] - Image pull secrets. (2 milliseconds)
[info] - Set executor labels, annotations, and secrets (5 milliseconds)
[info] EnvSecretsFeatureStepSuite:
[info] - sets up all keyRefs (6 milliseconds)
[info] KubernetesClusterSchedulerBackendSuite:
[info] - Start all components (6 milliseconds)
[info] - Stop all components (18 milliseconds)
[info] - Remove executor (4 milliseconds)
[info] - Kill executors (10 milliseconds)
[info] - Request total executors (3 milliseconds)
[info] DriverKubernetesCredentialsFeatureStepSuite:
[info] - Don't set any credentials (8 milliseconds)
[info] - Only set credentials that are manually mounted. (4 milliseconds)
[info] - Mount credentials from the submission client as a secret. (51 milliseconds)
[info] ClientSuite:
[info] - The client should configure the pod using the builder. (29 milliseconds)
[info] - The client should create Kubernetes resources (5 milliseconds)
[info] - Waiting for app completion should stall on the watcher (4 milliseconds)
[info] ExecutorPodsPollingSnapshotSourceSuite:
[info] - Items returned by the API should be pushed to the event queue (10 milliseconds)
[info] ExecutorPodsLifecycleManagerSuite:
[info] - When an executor reaches error states immediately, remove from the scheduler backend. (14 milliseconds)
[info] - Don't remove executors twice from Spark but remove from K8s repeatedly. (5 milliseconds)
[info] - When the scheduler backend lists executor ids that aren't present in the cluster, remove those executors from Spark. (15 milliseconds)
[info] KafkaStreamSuite:
[info] - partitionBy (11 seconds, 519 milliseconds)
[info] - mapVertices (296 milliseconds)
[info] - mapVertices changing type with same erased type (350 milliseconds)
[info] - mapEdges (149 milliseconds)
[info] - mapTriplets (395 milliseconds)
[info] - reverse (367 milliseconds)
[info] - reverse with join elimination (277 milliseconds)
[info] - subgraph (424 milliseconds)
[info] - mask (282 milliseconds)
[info] - checkpoint without draining iterator (18 seconds, 971 milliseconds)
[info] - groupEdges (463 milliseconds)
[info] - aggregateMessages (469 milliseconds)
[info] - Kafka input stream (4 seconds, 826 milliseconds)
[info] ReliableKafkaStreamSuite:
[info] - outerJoinVertices (861 milliseconds)
[info] - more edge partitions than vertex partitions (288 milliseconds)
[info] - checkpoint (389 milliseconds)
[info] - cache, getStorageLevel (59 milliseconds)
[info] - non-default number of edge partitions (328 milliseconds)
[info] - unpersist graph RDD (662 milliseconds)
[info] - Reliable Kafka input stream with single topic (1 second, 368 milliseconds)
[info] - SPARK-14219: pickRandomVertex (338 milliseconds)
[info] LabelPropagationSuite:
[info] - Reliable Kafka input stream with multiple topics (2 seconds, 198 milliseconds)
[info] - Label Propagation (3 seconds, 747 milliseconds)
[info] EdgePartitionSuite:
[info] - reverse (7 milliseconds)
[info] - map (2 milliseconds)
[info] - filter (1 millisecond)
[info] - groupEdges (2 milliseconds)
[info] - innerJoin (3 milliseconds)
[info] - isActive, numActives, replaceActives (1 millisecond)
[info] - tripletIterator (1 millisecond)
[info] - serialization (34 milliseconds)
[info] GraphLoaderSuite:
[info] - GraphLoader.edgeListFile (345 milliseconds)
[info] GraphGeneratorsSuite:
[info] - GraphGenerators.generateRandomEdges (3 milliseconds)
[info] - GraphGenerators.sampleLogNormal (10 milliseconds)
[info] KafkaClusterSuite:
[info] - GraphGenerators.logNormalGraph (444 milliseconds)
[info] - SPARK-5064 GraphGenerators.rmatGraph numEdges upper bound (146 milliseconds)
[info] - metadata apis (95 milliseconds)
[info] - leader offset apis (15 milliseconds)
[info] - consumer offset apis (422 milliseconds)
[info] KafkaDataConsumerSuite:
[info] KafkaRDDSuite:
[info] - basic usage (695 milliseconds)
[info] - iterator boundary conditions (918 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - KafkaDataConsumer reuse in case of same groupId and TopicPartition (115 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (2 seconds, 84 milliseconds)
[info] - receiving from largest starting offset (864 milliseconds)
[info] - run Spark in yarn-cluster mode with additional jar (24 seconds, 44 milliseconds)
[info] - creating stream by offset (808 milliseconds)
[info] - concurrent use of KafkaDataConsumer (3 seconds, 550 milliseconds)
[info] - checkpoint without draining iterator - caching before checkpointing (18 seconds, 324 milliseconds)
[info] - offset recovery (2 seconds, 956 milliseconds)
[info] KafkaRDDSuite:
[info] - Direct Kafka stream report input information (528 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (69 milliseconds)
[info] - maxMessagesPerPartition with no lag (61 milliseconds)
[info] - maxMessagesPerPartition respects max rate (64 milliseconds)
[error] running /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.7/build/sbt -Phadoop-2.7 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test ; process was terminated by signal 9
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE