Console Output (build failed; skipping 14,891 KB of earlier output)

[info] - streaming - write to kafka with topic field (1 second, 64 milliseconds)
[info] - streaming - write w/o topic field, with topic option (997 milliseconds)
[info] - streaming - topic field and topic option (944 milliseconds)
[info] - null topic attribute (554 milliseconds)
[info] - streaming - write data with bad schema (435 milliseconds)
[info] - streaming - write data with valid schema but wrong types (925 milliseconds)
[info] - streaming - write to non-existing topic (1 second, 484 milliseconds)
[info] - streaming - exception on config serializer (358 milliseconds)
[info] - generic - write big data with small producer buffer (51 seconds, 479 milliseconds)
[info] JsonUtilsSuite:
[info] - parsing partitions (2 milliseconds)
[info] - parsing partitionOffsets (0 milliseconds)
[info] KafkaSourceProviderSuite:
[info] - micro-batch mode - options should be handled as case-insensitive (945 milliseconds)
[info] - SPARK-28142 - continuous mode - options should be handled as case-insensitive (5 milliseconds)
[info] KafkaSinkStreamingSuite:
[info] - streaming - write to kafka with topic field (1 second, 95 milliseconds)
[info] - streaming - write aggregation w/o topic field, with topic option (1 second, 892 milliseconds)
[info] - streaming - aggregation with topic field and topic option (1 second, 654 milliseconds)
[info] - streaming - sink progress is produced (359 milliseconds)
[info] - streaming - write data with bad schema (379 milliseconds)
[info] - streaming - write data with valid schema but wrong types (353 milliseconds)
[info] - streaming - write to non-existing topic (5 seconds, 178 milliseconds)
[info] - streaming - exception on config serializer (158 milliseconds)
[info] KafkaSinkBatchSuiteV1:
[info] - batch - write to kafka (783 milliseconds)
[info] - batch - null topic field value, and no topic option (71 milliseconds)
[info] - SPARK-20496: batch - enforce analyzed plans (461 milliseconds)
[info] - batch - unsupported save modes (130 milliseconds)
[info] KafkaSinkBatchSuiteV2:
[info] - batch - write to kafka (739 milliseconds)
[info] - batch - null topic field value, and no topic option (56 milliseconds)
[info] - SPARK-20496: batch - enforce analyzed plans (506 milliseconds)
[info] - batch - unsupported save modes (158 milliseconds)
[info] - generic - write big data with small producer buffer (47 seconds, 476 milliseconds)
[info] KafkaOffsetRangeCalculatorSuite:
[info] - with no minPartition: N TopicPartitions to N offset ranges (1 millisecond)
[info] - with no minPartition: empty ranges ignored (1 millisecond)
[info] - with minPartition = 3: N TopicPartitions to N offset ranges (2 milliseconds)
[info] - with minPartition = 4: 1 TopicPartition to N offset ranges (1 millisecond)
[info] - with minPartition = 3: N skewed TopicPartitions to M offset ranges (0 milliseconds)
[info] - with minPartition = 3: range inexact multiple of minPartitions (0 milliseconds)
[info] - with minPartition = 3: empty ranges ignored (0 milliseconds)
[info] KafkaSparkConfSuite:
[info] - deprecated configs (1 millisecond)
[info] KafkaMicroBatchV1SourceSuite:
[info] - cannot stop Kafka stream (1 second, 142 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: true) (2 seconds, 467 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: true) (2 seconds, 120 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: true) (1 second, 687 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: true) (2 seconds, 892 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: true) (2 seconds, 781 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: true) (1 second, 705 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: true) (3 seconds, 176 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: true) (2 seconds, 697 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: true) (1 second, 802 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: false) (2 seconds, 540 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: false) (2 seconds, 22 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: false) (1 second, 561 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: false) (3 seconds, 28 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: false) (2 seconds, 694 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: false) (2 seconds, 441 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: false) (3 seconds, 168 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: false) (2 seconds, 809 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: false) (1 second, 752 milliseconds)
[info] - bad source options (15 milliseconds)
[info] - unsupported kafka configs (12 milliseconds)
[info] - get offsets from case insensitive parameters (0 milliseconds)
[info] - Kafka column types (796 milliseconds)
[info] - (de)serialization of initial offsets (677 milliseconds)
[info] - SPARK-26718 Rate limit set to Long.Max should not overflow integer during end offset calculation (1 second, 159 milliseconds)
[info] - maxOffsetsPerTrigger (4 seconds, 809 milliseconds)
[info] - input row metrics (1 second, 322 milliseconds)
[info] - subscribing topic by pattern with topic deletions (5 seconds, 849 milliseconds)
[info] - subscribe topic by pattern with topic recreation between batches (3 seconds, 424 milliseconds)
[info] - ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) (511 milliseconds)
[info] - deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) (979 milliseconds)
[info] - deserialization of initial offset written by future version (312 milliseconds)
[info] - KafkaSource with watermark (2 seconds, 208 milliseconds)
[info] - delete a topic when a Spark job is running (5 seconds, 207 milliseconds)
[info] - SPARK-22956: currentPartitionOffsets should be set when no new data comes in (4 seconds, 709 milliseconds)
[info] - allow group.id prefix (1 second, 343 milliseconds)
[info] - allow group.id override (1 second, 388 milliseconds)
[info] - ensure stream-stream self-join generates only one offset in log and correct metrics (15 seconds, 369 milliseconds)
[info] - read Kafka transactional messages: read_committed (16 seconds, 920 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (9 seconds, 501 milliseconds)
[info] - SPARK-25495: FetchedData.reset should reset all fields (2 seconds, 238 milliseconds)
[info] - SPARK-27494: read kafka record containing null key/values. (1 second, 225 milliseconds)
[info] - V1 Source is used when disabled through SQLConf (734 milliseconds)
[info] KafkaMicroBatchV2SourceSuite:
[info] - cannot stop Kafka stream (1 second, 60 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: true) (2 seconds, 667 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: true) (2 seconds, 169 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: true) (1 second, 500 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: true) (3 seconds, 423 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: true) (2 seconds, 476 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: true) (1 second, 624 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: true) (3 seconds, 30 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: true) (2 seconds, 462 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: true) (1 second, 736 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: false) (2 seconds, 168 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: false) (2 seconds, 89 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: false) (1 second, 522 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: false) (2 seconds, 989 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: false) (2 seconds, 643 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: false) (1 second, 695 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: false) (3 seconds, 111 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: false) (2 seconds, 785 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: false) (1 second, 727 milliseconds)
[info] - bad source options (11 milliseconds)
[info] - unsupported kafka configs (8 milliseconds)
[info] - get offsets from case insensitive parameters (0 milliseconds)
[info] - Kafka column types (1 second, 41 milliseconds)
[info] - (de)serialization of initial offsets (642 milliseconds)
[info] - SPARK-26718 Rate limit set to Long.Max should not overflow integer during end offset calculation (1 second, 177 milliseconds)
[info] - maxOffsetsPerTrigger (3 seconds, 957 milliseconds)
[info] - input row metrics (1 second, 255 milliseconds)
[info] - subscribing topic by pattern with topic deletions (5 seconds, 888 milliseconds)
[info] - subscribe topic by pattern with topic recreation between batches (3 seconds, 831 milliseconds)
[info] - ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) (572 milliseconds)
[info] - deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) (1 second, 40 milliseconds)
[info] - deserialization of initial offset written by future version (345 milliseconds)
[info] - KafkaSource with watermark (1 second, 477 milliseconds)
[info] - delete a topic when a Spark job is running (5 seconds, 845 milliseconds)
[info] - SPARK-22956: currentPartitionOffsets should be set when no new data comes in (4 seconds, 288 milliseconds)
[info] - allow group.id prefix (1 second, 311 milliseconds)
[info] - allow group.id override (1 second, 310 milliseconds)
[info] - ensure stream-stream self-join generates only one offset in log and correct metrics (15 seconds, 665 milliseconds)
[info] - read Kafka transactional messages: read_committed (16 seconds, 841 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (10 seconds, 52 milliseconds)
[info] - SPARK-25495: FetchedData.reset should reset all fields (2 seconds, 453 milliseconds)
[info] - SPARK-27494: read kafka record containing null key/values. (1 second, 208 milliseconds)
[info] - V2 Source is used by default (749 milliseconds)
[info] - minPartitions is supported (225 milliseconds)
[info] KafkaSourceStressSuite:
[info] - stress test with multiple topics and partitions (38 seconds, 585 milliseconds)
[info] CachedKafkaProducerSuite:
[info] - Should return the cached instance on calling getOrCreate with same params. (4 milliseconds)
[info] - Should close the correct kafka producer for the given kafkaPrams. (5 milliseconds)
[info] KafkaSourceOffsetSuite:
[info] - comparison {"t":{"0":1}} <=> {"t":{"0":2}} (0 milliseconds)
[info] - comparison {"t":{"1":0,"0":1}} <=> {"t":{"1":1,"0":2}} (0 milliseconds)
[info] - comparison {"t":{"0":1},"T":{"0":0}} <=> {"t":{"0":2},"T":{"0":1}} (0 milliseconds)
[info] - comparison {"t":{"0":1}} <=> {"t":{"1":1,"0":2}} (0 milliseconds)
[info] - comparison {"t":{"0":1}} <=> {"t":{"1":3,"0":2}} (0 milliseconds)
[info] - basic serialization - deserialization (1 millisecond)
[info] - OffsetSeqLog serialization - deserialization (105 milliseconds)
[info] - read Spark 2.1.0 offset format (2 milliseconds)
[info] ScalaTest
[info] Run completed in 13 minutes, 26 seconds.
[info] Total number of tests run: 197
[info] Suites: completed 21, aborted 0
[info] Tests: succeeded 197, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 197, Failed 0, Errors 0, Passed 197
[success] Total time: 830 s, completed Jul 20, 2019 11:55:51 PM
[info] Updating {file:/home/jenkins/workspace/SparkPullRequestBuilder@2/}examples...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* org.apache.thrift:libthrift:0.12.0 is selected over 0.9.3
[warn] 	    +- org.apache.spark:spark-hive_2.12:3.0.0-SNAPSHOT    (depends on 0.9.3)
[warn] 	    +- org.apache.thrift:libfb303:0.9.3                   (depends on 0.9.3)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.12:3.0.0-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.7.4                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 
[warn] 	* io.netty:netty-all:4.1.30.Final is selected over 4.0.23.Final
[warn] 	    +- org.apache.spark:spark-core_2.12:3.0.0-SNAPSHOT    (depends on 4.0.23.Final)
[warn] 	    +- org.apache.spark:spark-network-common_2.12:3.0.0-SNAPSHOT (depends on 4.0.23.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.7.4                (depends on 4.0.23.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/SparkPullRequestBuilder@2/core/target/scala-2.12/spark-core_2.12-3.0.0-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetWriteBuilder.scala:91: value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in class ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
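The two ENABLE_JOB_SUMMARY warnings above come from reading the old boolean property that parquet-mr deprecated in favour of a summary "level" setting. Below is a minimal sketch of the newer configuration, assuming parquet-mr 1.10+ exposes ParquetOutputFormat.JOB_SUMMARY_LEVEL; it is not the Spark fix itself, just the replacement knob the deprecation points at:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.parquet.hadoop.ParquetOutputFormat

// Assumption: parquet-mr 1.10+ provides ParquetOutputFormat.JOB_SUMMARY_LEVEL,
// which supersedes the deprecated boolean ENABLE_JOB_SUMMARY flag.
val hadoopConf = new Configuration()
hadoopConf.set(ParquetOutputFormat.JOB_SUMMARY_LEVEL, "NONE") // NONE, COMMON_ONLY, or ALL
```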
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:262: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:273: method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:447: method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:121: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]           Option[TimeZone]) => RecordReader[Void, T]): RecordReader[Void, T] = {
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:126: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]       new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:135: method readFooter in class ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readFooter(conf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:184: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]       split: ParquetInputSplit,
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:213: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]       split: ParquetInputSplit,
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala:283: class WriteToDataSourceV2 in package v2 is deprecated (since 2.4.0): Use specific logical plans like AppendData instead
[warn]               WriteToDataSourceV2(write, df.logicalPlan)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala:36: class WriteToDataSourceV2 in package v2 is deprecated (since 2.4.0): Use specific logical plans like AppendData instead
[warn]   def createPlan(batchId: Long): WriteToDataSourceV2 = {
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/WriteToMicroBatchDataSource.scala:37: class WriteToDataSourceV2 in package v2 is deprecated (since 2.4.0): Use specific logical plans like AppendData instead
[warn]     WriteToDataSourceV2(new MicroBatchWrite(batchId, write), query)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:474: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:167: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:215: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:245: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:321: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:171: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:206: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:108: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:162: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
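All of the consumer.poll warnings above (in both the kafka-0-10-sql and kafka-0-10 modules) flag the poll(long) overload that kafka-clients 2.0 deprecated in favour of poll(java.time.Duration). A minimal standalone sketch of the non-deprecated overload follows; this is not the Spark code itself, and the broker, group, and topic names are placeholders:

```scala
import java.time.Duration
import java.{util => ju}
import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer}

val props = new ju.Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker
props.put("group.id", "poll-duration-sketch")    // placeholder group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer")

val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
consumer.subscribe(ju.Collections.singletonList("example-topic")) // placeholder topic

// Deprecated: consumer.poll(0)
// Replacement overload (kafka-clients 2.0+):
val records: ConsumerRecords[Array[Byte], Array[Byte]] = consumer.poll(Duration.ofMillis(0))
consumer.close()
```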
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/examples/src/main/scala/org/apache/spark/examples/ml/GradientBoostedTreeClassifierExample.scala:68: method labels in class StringIndexerModel is deprecated (since 3.0.0): `labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead.
[warn]       .setLabels(labelIndexer.labels)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/examples/src/main/scala/org/apache/spark/examples/ml/RandomForestClassifierExample.scala:67: method labels in class StringIndexerModel is deprecated (since 3.0.0): `labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead.
[warn]       .setLabels(labelIndexer.labels)
[warn] 
[warn] /home/jenkins/workspace/SparkPullRequestBuilder@2/examples/src/main/scala/org/apache/spark/examples/ml/DecisionTreeClassificationExample.scala:65: method labels in class StringIndexerModel is deprecated (since 3.0.0): `labels` is deprecated and will be removed in 3.1.0. Use `labelsArray` instead.
[warn]       .setLabels(labelIndexer.labels)
[warn] 
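The three setLabels warnings above point at the same deprecation: StringIndexerModel.labels was superseded by labelsArray in Spark 3.0. A minimal runnable sketch of the suggested replacement for a single-column indexer (the dataset and column names are made up, not the examples' actual data):

```scala
import org.apache.spark.ml.feature.{IndexToString, StringIndexer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").appName("labelsArray-sketch").getOrCreate()
import spark.implicits._

// Toy data: a string label column plus a fake "prediction" column to convert back to strings.
val data = Seq(("a", 0.0), ("b", 1.0), ("a", 0.0)).toDF("label", "prediction")

val labelIndexer = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("indexedLabel")
  .fit(data)

val labelConverter = new IndexToString()
  .setInputCol("prediction")
  .setOutputCol("predictedLabel")
  .setLabels(labelIndexer.labelsArray(0)) // was: labelIndexer.labels (deprecated since 3.0.0)

labelConverter.transform(data).show()
spark.stop()
```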
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/SparkPullRequestBuilder@2/examples/target/scala-2.12/jars/spark-examples_2.12-3.0.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] ScalaTest
[info] Run completed in 18 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[success] Total time: 10 s, completed Jul 20, 2019 11:56:01 PM

========================================================================
Running PySpark tests
========================================================================
Running PySpark tests. Output is in /home/jenkins/workspace/SparkPullRequestBuilder@2/python/unit-tests.log
Will test against the following Python executables: ['python2.7', 'python3.6', 'pypy']
Will test the following Python modules: ['pyspark-sql', 'pyspark-mllib', 'pyspark-ml']
Starting test(pypy): pyspark.sql.tests.test_appsubmit
Starting test(pypy): pyspark.sql.tests.test_conf
Starting test(pypy): pyspark.sql.tests.test_arrow
Starting test(pypy): pyspark.sql.tests.test_column
Starting test(pypy): pyspark.sql.tests.test_context
Starting test(pypy): pyspark.sql.tests.test_dataframe
Starting test(pypy): pyspark.sql.tests.test_datasources
Starting test(pypy): pyspark.sql.tests.test_catalog
Finished test(pypy): pyspark.sql.tests.test_arrow (0s) ... 52 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_functions
Finished test(pypy): pyspark.sql.tests.test_conf (9s)
Starting test(pypy): pyspark.sql.tests.test_group
Finished test(pypy): pyspark.sql.tests.test_column (20s)
Starting test(pypy): pyspark.sql.tests.test_pandas_udf
Finished test(pypy): pyspark.sql.tests.test_pandas_udf (0s) ... 6 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_pandas_udf_grouped_agg
Finished test(pypy): pyspark.sql.tests.test_catalog (21s)
Starting test(pypy): pyspark.sql.tests.test_pandas_udf_grouped_map
Finished test(pypy): pyspark.sql.tests.test_pandas_udf_grouped_agg (0s) ... 14 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_pandas_udf_scalar
Finished test(pypy): pyspark.sql.tests.test_pandas_udf_grouped_map (0s) ... 18 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_pandas_udf_window
Finished test(pypy): pyspark.sql.tests.test_pandas_udf_scalar (0s) ... 49 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_readwriter
Finished test(pypy): pyspark.sql.tests.test_pandas_udf_window (0s) ... 14 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_serde
Finished test(pypy): pyspark.sql.tests.test_datasources (27s)
Starting test(pypy): pyspark.sql.tests.test_session
Finished test(pypy): pyspark.sql.tests.test_group (19s)
Starting test(pypy): pyspark.sql.tests.test_streaming
Finished test(pypy): pyspark.sql.tests.test_functions (33s)
Starting test(pypy): pyspark.sql.tests.test_types
Finished test(pypy): pyspark.sql.tests.test_dataframe (41s) ... 5 tests were skipped
Starting test(pypy): pyspark.sql.tests.test_udf
Finished test(pypy): pyspark.sql.tests.test_serde (22s)
Starting test(pypy): pyspark.sql.tests.test_utils
Finished test(pypy): pyspark.sql.tests.test_utils (10s)
Starting test(python2.7): pyspark.ml.tests.test_algorithms
Finished test(pypy): pyspark.sql.tests.test_readwriter (34s)
Starting test(python2.7): pyspark.ml.tests.test_base
Finished test(pypy): pyspark.sql.tests.test_session (30s)
Starting test(python2.7): pyspark.ml.tests.test_evaluation
Finished test(pypy): pyspark.sql.tests.test_context (65s)
Starting test(python2.7): pyspark.ml.tests.test_feature
Finished test(python2.7): pyspark.ml.tests.test_base (14s)
Starting test(python2.7): pyspark.ml.tests.test_image
Finished test(python2.7): pyspark.ml.tests.test_evaluation (18s)
Starting test(python2.7): pyspark.ml.tests.test_linalg
Finished test(pypy): pyspark.sql.tests.test_streaming (51s)
Starting test(python2.7): pyspark.ml.tests.test_param
Finished test(pypy): pyspark.sql.tests.test_types (46s)
Starting test(python2.7): pyspark.ml.tests.test_persistence
Finished test(python2.7): pyspark.ml.tests.test_image (18s)
Starting test(python2.7): pyspark.ml.tests.test_pipeline
Finished test(pypy): pyspark.sql.tests.test_udf (53s)
Starting test(python2.7): pyspark.ml.tests.test_stat
Finished test(python2.7): pyspark.ml.tests.test_pipeline (5s)
Starting test(python2.7): pyspark.ml.tests.test_training_summary
Finished test(python2.7): pyspark.ml.tests.test_param (20s)
Starting test(python2.7): pyspark.ml.tests.test_tuning
Finished test(python2.7): pyspark.ml.tests.test_feature (34s)
Starting test(python2.7): pyspark.ml.tests.test_wrapper
Finished test(python2.7): pyspark.ml.tests.test_linalg (33s)
Starting test(python2.7): pyspark.mllib.tests.test_algorithms
Finished test(python2.7): pyspark.ml.tests.test_stat (16s)
Starting test(python2.7): pyspark.mllib.tests.test_feature
Finished test(python2.7): pyspark.ml.tests.test_wrapper (17s)
Starting test(python2.7): pyspark.mllib.tests.test_linalg
Finished test(python2.7): pyspark.ml.tests.test_persistence (44s)
Starting test(python2.7): pyspark.mllib.tests.test_stat
Finished test(python2.7): pyspark.ml.tests.test_training_summary (32s)
Starting test(python2.7): pyspark.mllib.tests.test_streaming_algorithms
Finished test(python2.7): pyspark.mllib.tests.test_feature (31s)
Starting test(python2.7): pyspark.mllib.tests.test_util
Finished test(python2.7): pyspark.ml.tests.test_algorithms (88s)
Starting test(python2.7): pyspark.sql.tests.test_appsubmit
Finished test(python2.7): pyspark.mllib.tests.test_stat (26s)
Starting test(python2.7): pyspark.sql.tests.test_arrow
Finished test(python2.7): pyspark.sql.tests.test_arrow (0s) ... 52 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_catalog
Finished test(python2.7): pyspark.mllib.tests.test_util (12s)
Starting test(python2.7): pyspark.sql.tests.test_column
Finished test(python2.7): pyspark.sql.tests.test_catalog (20s)
Starting test(python2.7): pyspark.sql.tests.test_conf
Finished test(python2.7): pyspark.sql.tests.test_column (19s)
Starting test(python2.7): pyspark.sql.tests.test_context
Finished test(python2.7): pyspark.mllib.tests.test_algorithms (68s)
Starting test(python2.7): pyspark.sql.tests.test_dataframe
Finished test(python2.7): pyspark.mllib.tests.test_linalg (62s)
Starting test(python2.7): pyspark.sql.tests.test_datasources
Finished test(python2.7): pyspark.sql.tests.test_conf (9s)
Starting test(python2.7): pyspark.sql.tests.test_functions
Finished test(python2.7): pyspark.sql.tests.test_datasources (26s)
Starting test(python2.7): pyspark.sql.tests.test_group
Finished test(python2.7): pyspark.sql.tests.test_functions (33s)
Starting test(python2.7): pyspark.sql.tests.test_pandas_udf
Finished test(python2.7): pyspark.sql.tests.test_pandas_udf (0s) ... 6 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_pandas_udf_grouped_agg
Finished test(python2.7): pyspark.sql.tests.test_pandas_udf_grouped_agg (0s) ... 14 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_pandas_udf_grouped_map
Finished test(python2.7): pyspark.sql.tests.test_pandas_udf_grouped_map (0s) ... 18 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_pandas_udf_scalar
Finished test(python2.7): pyspark.sql.tests.test_pandas_udf_scalar (1s) ... 49 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_pandas_udf_window
Finished test(python2.7): pyspark.sql.tests.test_pandas_udf_window (1s) ... 14 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_readwriter
Finished test(python2.7): pyspark.ml.tests.test_tuning (123s)
Starting test(python2.7): pyspark.sql.tests.test_serde
Finished test(python2.7): pyspark.sql.tests.test_group (18s)
Starting test(python2.7): pyspark.sql.tests.test_session
Finished test(python2.7): pyspark.sql.tests.test_dataframe (47s) ... 5 tests were skipped
Starting test(python2.7): pyspark.sql.tests.test_streaming
Finished test(python2.7): pyspark.sql.tests.test_context (59s)
Starting test(python2.7): pyspark.sql.tests.test_types
Finished test(pypy): pyspark.sql.tests.test_appsubmit (246s)
Starting test(python2.7): pyspark.sql.tests.test_udf
Finished test(python2.7): pyspark.sql.tests.test_serde (23s)
Starting test(python2.7): pyspark.sql.tests.test_utils
Finished test(python2.7): pyspark.sql.tests.test_session (29s)
Starting test(python3.6): pyspark.ml.tests.test_algorithms
Finished test(python2.7): pyspark.sql.tests.test_readwriter (36s)
Starting test(python3.6): pyspark.ml.tests.test_base
Finished test(python2.7): pyspark.sql.tests.test_utils (12s)
Starting test(python3.6): pyspark.ml.tests.test_evaluation
Finished test(python2.7): pyspark.mllib.tests.test_streaming_algorithms (132s)
Starting test(python3.6): pyspark.ml.tests.test_feature
Finished test(python3.6): pyspark.ml.tests.test_base (16s)
Starting test(python3.6): pyspark.ml.tests.test_image
Finished test(python2.7): pyspark.sql.tests.test_streaming (50s)
Starting test(python3.6): pyspark.ml.tests.test_linalg
Finished test(python3.6): pyspark.ml.tests.test_evaluation (18s)
Starting test(python3.6): pyspark.ml.tests.test_param
Finished test(python2.7): pyspark.sql.tests.test_types (51s)
Starting test(python3.6): pyspark.ml.tests.test_persistence
Finished test(python3.6): pyspark.ml.tests.test_image (19s)
Starting test(python3.6): pyspark.ml.tests.test_pipeline
Finished test(python3.6): pyspark.ml.tests.test_feature (34s)
Starting test(python3.6): pyspark.ml.tests.test_stat
Finished test(python3.6): pyspark.ml.tests.test_param (20s)
Starting test(python3.6): pyspark.ml.tests.test_training_summary
Finished test(python3.6): pyspark.ml.tests.test_pipeline (5s)
Starting test(python3.6): pyspark.ml.tests.test_tuning
Finished test(python2.7): pyspark.sql.tests.test_udf (55s)
Starting test(python3.6): pyspark.ml.tests.test_wrapper
Finished test(python3.6): pyspark.ml.tests.test_linalg (35s)
Starting test(python3.6): pyspark.mllib.tests.test_algorithms
Finished test(python3.6): pyspark.ml.tests.test_stat (16s)
Starting test(python3.6): pyspark.mllib.tests.test_feature
Finished test(python3.6): pyspark.ml.tests.test_wrapper (19s)
Starting test(python3.6): pyspark.mllib.tests.test_linalg
Finished test(python3.6): pyspark.ml.tests.test_persistence (45s)
Starting test(python3.6): pyspark.mllib.tests.test_stat
Finished test(python3.6): pyspark.ml.tests.test_training_summary (33s)
Starting test(python3.6): pyspark.mllib.tests.test_streaming_algorithms
Finished test(python3.6): pyspark.ml.tests.test_algorithms (90s)
Starting test(python3.6): pyspark.mllib.tests.test_util
Finished test(python3.6): pyspark.mllib.tests.test_feature (33s)
Starting test(python3.6): pyspark.sql.tests.test_appsubmit
Finished test(python3.6): pyspark.mllib.tests.test_util (13s)
Starting test(python3.6): pyspark.sql.tests.test_arrow
Finished test(python3.6): pyspark.mllib.tests.test_stat (29s)
Starting test(python3.6): pyspark.sql.tests.test_catalog
Finished test(python2.7): pyspark.sql.tests.test_appsubmit (227s)
Starting test(python3.6): pyspark.sql.tests.test_column
Finished test(python3.6): pyspark.sql.tests.test_catalog (19s)
Starting test(python3.6): pyspark.sql.tests.test_conf
Finished test(python3.6): pyspark.mllib.tests.test_algorithms (73s)
Starting test(python3.6): pyspark.sql.tests.test_context
Finished test(python3.6): pyspark.mllib.tests.test_linalg (65s)
Starting test(python3.6): pyspark.sql.tests.test_dataframe
Finished test(python3.6): pyspark.sql.tests.test_conf (10s)
Starting test(python3.6): pyspark.sql.tests.test_datasources
Finished test(python3.6): pyspark.sql.tests.test_arrow (30s)
Starting test(python3.6): pyspark.sql.tests.test_functions
Finished test(python3.6): pyspark.sql.tests.test_column (19s)
Starting test(python3.6): pyspark.sql.tests.test_group
Finished test(python3.6): pyspark.sql.tests.test_group (18s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf
Finished test(python3.6): pyspark.sql.tests.test_datasources (29s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_agg
Finished test(python3.6): pyspark.sql.tests.test_functions (36s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_map
Finished test(python3.6): pyspark.ml.tests.test_tuning (129s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_scalar
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf (29s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_window
Finished test(python3.6): pyspark.sql.tests.test_dataframe (54s) ... 2 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_readwriter
Finished test(python3.6): pyspark.sql.tests.test_context (71s)
Starting test(python3.6): pyspark.sql.tests.test_serde
Finished test(python3.6): pyspark.sql.tests.test_serde (32s)
Starting test(python3.6): pyspark.sql.tests.test_session
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_map (62s)
Starting test(python3.6): pyspark.sql.tests.test_streaming
Finished test(python3.6): pyspark.sql.tests.test_readwriter (50s)
Starting test(python3.6): pyspark.sql.tests.test_types
Finished test(python3.6): pyspark.mllib.tests.test_streaming_algorithms (165s)
Starting test(python3.6): pyspark.sql.tests.test_udf
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_window (63s)
Starting test(python3.6): pyspark.sql.tests.test_utils
Finished test(python3.6): pyspark.sql.tests.test_utils (14s)
Starting test(pypy): pyspark.sql.avro.functions
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_scalar (94s)
Starting test(pypy): pyspark.sql.catalog
Finished test(python3.6): pyspark.sql.tests.test_session (34s)
Starting test(pypy): pyspark.sql.column
Finished test(pypy): pyspark.sql.avro.functions (19s)
Starting test(pypy): pyspark.sql.conf
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_agg (117s)
Starting test(pypy): pyspark.sql.context
Attempting to post to Github...
[error] running /home/jenkins/workspace/SparkPullRequestBuilder@2/python/run-tests --modules=pyspark-sql,pyspark-mllib,pyspark-ml --parallelism=8 ; process was terminated by signal 9
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/107953/
Test FAILed.
Finished: FAILURE