Console Output

Skipping 33,343 KB..
[info] KafkaRelationSuiteV2:
[info] - explicit earliest to latest offsets (4 seconds, 770 milliseconds)
[info] - default starting and ending offsets (2 seconds, 377 milliseconds)
[info] - explicit offsets (6 seconds, 650 milliseconds)
[info] - default starting and ending offsets with headers (2 seconds, 391 milliseconds)
[info] - timestamp provided for starting and ending (2 seconds, 392 milliseconds)
[info] - timestamp provided for starting, offset provided for ending (2 seconds, 461 milliseconds)
[info] - timestamp provided for ending, offset provided for starting (2 seconds, 420 milliseconds)
[info] - timestamp provided for starting, ending not provided (2 seconds, 438 milliseconds)
[info] - timestamp provided for ending, starting not provided (2 seconds, 261 milliseconds)
[info] - no matched offset for timestamp - startingOffsets (3 seconds, 168 milliseconds)
[info] - no matched offset for timestamp - endingOffsets (2 seconds, 395 milliseconds)
[info] - reuse same dataframe in query (1 second, 397 milliseconds)
[info] - test late binding start offsets (8 seconds, 484 milliseconds)
[info] - bad batch query options (48 milliseconds)
[info] - read Kafka transactional messages: read_committed (3 seconds, 931 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (4 seconds, 611 milliseconds)
[info] - SPARK-30656: minPartitions (4 seconds, 312 milliseconds)
[info] - V2 Source is used when set through SQLConf (7 milliseconds)
[info] KafkaMicroBatchV1SourceSuite:
[info] - cannot stop Kafka stream (607 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: true) (5 seconds, 538 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: true) (5 seconds, 11 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: true) (3 seconds, 477 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: true) (5 seconds, 182 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: true) (5 seconds, 159 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: true) (6 seconds, 205 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: true) (3 seconds, 314 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: true) (4 seconds, 750 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: true) (5 seconds, 862 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: true) (6 seconds, 290 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: true) (2 seconds, 690 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: true) (4 seconds, 616 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: false) (5 seconds, 205 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: false) (5 seconds, 178 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: false) (2 seconds, 837 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: false) (4 seconds, 550 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: false) (5 seconds, 951 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: false) (5 seconds, 390 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: false) (2 seconds, 868 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: false) (5 seconds, 2 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: false) (5 seconds, 979 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: false) (5 seconds, 770 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: false) (2 seconds, 945 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: false) (5 seconds, 461 milliseconds)
[info] - bad source options (14 milliseconds)
[info] - unsupported kafka configs (9 milliseconds)
[info] - get offsets from case insensitive parameters (1 millisecond)
[info] - Kafka column types (1 second, 92 milliseconds)
[info] - (de)serialization of initial offsets (509 milliseconds)
[info] - SPARK-26718 Rate limit set to Long.Max should not overflow integer during end offset calculation (1 second, 497 milliseconds)
[info] - maxOffsetsPerTrigger (5 seconds, 800 milliseconds)
[info] - input row metrics (1 second, 797 milliseconds)
[info] - subscribing topic by pattern with topic deletions (6 seconds, 840 milliseconds)
[info] - subscribe topic by pattern with topic recreation between batches (3 seconds, 309 milliseconds)
[info] - ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) (441 milliseconds)
[info] - deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) (2 seconds, 294 milliseconds)
[info] - deserialization of initial offset written by future version (201 milliseconds)
[info] - KafkaSource with watermark (1 second, 551 milliseconds)
[info] - delete a topic when a Spark job is running (3 seconds, 537 milliseconds)
[info] - SPARK-22956: currentPartitionOffsets should be set when no new data comes in (2 seconds, 405 milliseconds)
[info] - allow group.id prefix (1 second, 476 milliseconds)
[info] - allow group.id override (1 second, 559 milliseconds)
[info] - ensure stream-stream self-join generates only one offset in log and correct metrics (13 seconds, 528 milliseconds)
[info] - read Kafka transactional messages: read_committed (14 seconds, 404 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (9 seconds, 662 milliseconds)
[info] - SPARK-25495: FetchedData.reset should reset all fields (1 second, 804 milliseconds)
[info] - SPARK-27494: read kafka record containing null key/values. (916 milliseconds)
[info] - SPARK-30656: minPartitions (1 second, 339 milliseconds)
[info] - V1 Source is used when disabled through SQLConf (335 milliseconds)
[info] KafkaRelationSuiteWithAdminV1:
[info] - explicit earliest to latest offsets (4 seconds, 438 milliseconds)
[info] - default starting and ending offsets (2 seconds, 333 milliseconds)
[info] - explicit offsets (6 seconds, 459 milliseconds)
[info] - default starting and ending offsets with headers (2 seconds, 256 milliseconds)
[info] - timestamp provided for starting and ending (2 seconds, 357 milliseconds)
[info] - timestamp provided for starting, offset provided for ending (2 seconds, 340 milliseconds)
[info] - timestamp provided for ending, offset provided for starting (2 seconds, 239 milliseconds)
[info] - timestamp provided for starting, ending not provided (2 seconds, 294 milliseconds)
[info] - timestamp provided for ending, starting not provided (2 seconds, 344 milliseconds)
[info] - no matched offset for timestamp - startingOffsets (3 seconds, 191 milliseconds)
[info] - no matched offset for timestamp - endingOffsets (2 seconds, 214 milliseconds)
[info] - reuse same dataframe in query (1 second, 224 milliseconds)
[info] - test late binding start offsets (5 seconds, 237 milliseconds)
[info] - bad batch query options (11 milliseconds)
[info] - read Kafka transactional messages: read_committed (3 seconds, 798 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (4 seconds, 694 milliseconds)
[info] - SPARK-30656: minPartitions (3 seconds, 896 milliseconds)
[info] - V1 Source is used when set through SQLConf (11 milliseconds)
[info] JsonUtilsSuite:
[info] - parsing partitions (0 milliseconds)
[info] - parsing partitionOffsets (0 milliseconds)
[info] KafkaSourceOffsetSuite:
[info] - comparison {"t":{"0":1}} <=> {"t":{"0":2}} (0 milliseconds)
[info] - comparison {"t":{"1":0,"0":1}} <=> {"t":{"1":1,"0":2}} (1 millisecond)
[info] - comparison {"t":{"0":1},"T":{"0":0}} <=> {"t":{"0":2},"T":{"0":1}} (0 milliseconds)
[info] - comparison {"t":{"0":1}} <=> {"t":{"1":1,"0":2}} (1 millisecond)
[info] - comparison {"t":{"0":1}} <=> {"t":{"1":3,"0":2}} (0 milliseconds)
[info] - basic serialization - deserialization (1 millisecond)
[info] - OffsetSeqLog serialization - deserialization (142 milliseconds)
[info] - read Spark 2.1.0 offset format (2 milliseconds)
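A note on the comparison test names above: each JSON value is a serialized KafkaSourceOffset, a map from topic name to a partition -> offset table, so {"t":{"1":0,"0":1}} reads as topic "t" with partition 1 at offset 0 and partition 0 at offset 1, and the comparison tests exercise the ordering of such offsets per topic-partition.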
[info] KafkaOffsetReaderSuite:
[info] - isolationLevel must give back default isolation level when not set (1 millisecond)
[info] - isolationLevel must give back READ_UNCOMMITTED when set (0 milliseconds)
[info] - isolationLevel must give back READ_COMMITTED when set (0 milliseconds)
[info] - isolationLevel must throw exception when invalid isolation level set (1 millisecond)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - using specific offsets with useDeprecatedOffsetFetching true (174 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - using specific offsets with useDeprecatedOffsetFetching false (69 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - using special offsets with useDeprecatedOffsetFetching true (76 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - using special offsets with useDeprecatedOffsetFetching false (51 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - multiple topic partitions with useDeprecatedOffsetFetching true (177 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - multiple topic partitions with useDeprecatedOffsetFetching false (175 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromResolvedOffsets with useDeprecatedOffsetFetching true (162 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromResolvedOffsets with useDeprecatedOffsetFetching false (178 milliseconds)
[info] KafkaSinkBatchSuiteV2:
[info] - batch - write to kafka (1 second, 513 milliseconds)
[info] - batch - partition column and partitioner priorities (3 seconds, 990 milliseconds)
[info] - batch - null topic field value, and no topic option (51 milliseconds)
[info] - SPARK-20496: batch - enforce analyzed plans (229 milliseconds)
[info] - batch - unsupported save modes (39 milliseconds)
[info] - generic - write big data with small producer buffer (1 second, 996 milliseconds)
[info] KafkaSourceStressSuite:
[info] - stress test with multiple topics and partitions (47 seconds, 731 milliseconds)
[info] KafkaDontFailOnDataLossSuite:
[info] - failOnDataLoss=false should not return duplicated records: microbatch v1 (31 seconds, 60 milliseconds)
[info] - failOnDataLoss=false should not return duplicated records: microbatch v2 (1 second, 268 milliseconds)
[info] - failOnDataLoss=false should not return duplicated records: continuous processing (1 second, 95 milliseconds)
[info] - failOnDataLoss=false should not return duplicated records: batch v1 (1 second, 905 milliseconds)
[info] - failOnDataLoss=false should not return duplicated records: batch v2 (1 second, 36 milliseconds)
[info] KafkaSinkBatchSuiteV1:
[info] - batch - write to kafka (1 second, 472 milliseconds)
[info] - batch - partition column and partitioner priorities (3 seconds, 859 milliseconds)
[info] - batch - null topic field value, and no topic option (36 milliseconds)
[info] - SPARK-20496: batch - enforce analyzed plans (76 milliseconds)
[info] - batch - unsupported save modes (52 milliseconds)
[info] KafkaSparkConfSuite:
[info] - deprecated configs (1 millisecond)
[info] KafkaContinuousSourceSuite:
[info] - cannot stop Kafka stream (500 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: true) (8 seconds, 868 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: true) (5 seconds, 932 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: true) (2 seconds, 982 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: true) (5 seconds, 26 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: true) (10 seconds, 995 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: true) (7 seconds, 899 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: true) (3 seconds, 124 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: true) (7 seconds, 50 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: true) (9 seconds, 990 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: true) (8 seconds, 990 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: true) (2 seconds, 945 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: true) (7 seconds, 18 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: false) (8 seconds, 964 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: false) (7 seconds, 73 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: false) (3 seconds, 966 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: false) (6 seconds, 16 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: false) (10 seconds, 981 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: false) (7 seconds, 958 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: false) (3 seconds, 80 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: false) (6 seconds, 934 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: false) (11 seconds, 34 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: false) (8 seconds, 11 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: false) (3 seconds, 993 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: false) (6 seconds, 994 milliseconds)
[info] - bad source options (17 milliseconds)
[info] - unsupported kafka configs (10 milliseconds)
[info] - get offsets from case insensitive parameters (1 millisecond)
[info] - Kafka column types (1 second, 939 milliseconds)
[info] - ensure continuous stream is being used (157 milliseconds)
[info] - read Kafka transactional messages: read_committed (2 seconds, 56 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (2 seconds, 199 milliseconds)
[info] - SPARK-27494: read kafka record containing null key/values. (1 second, 198 milliseconds)
[info] KafkaSourceStressForDontFailOnDataLossSuite:
[info] - stress test for failOnDataLoss=false (20 seconds, 254 milliseconds)
[info] KafkaContinuousSourceTopicDeletionSuite:
[info] - ensure continuous stream is being used (121 milliseconds)
[info] - subscribing topic by pattern with topic deletions (5 seconds, 757 milliseconds)
[info] KafkaMicroBatchV2SourceWithAdminSuite:
[info] - cannot stop Kafka stream (414 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: true) (5 seconds, 56 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: true) (4 seconds, 704 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: true) (3 seconds, 604 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: true) (4 seconds, 746 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: true) (4 seconds, 919 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: true) (6 seconds, 43 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: true) (2 seconds, 992 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: true) (5 seconds, 233 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: true) (6 seconds, 203 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: true) (6 seconds, 33 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: true) (2 seconds, 958 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: true) (5 seconds, 183 milliseconds)
[info] - assign from latest offsets (failOnDataLoss: false) (6 seconds, 56 milliseconds)
[info] - assign from earliest offsets (failOnDataLoss: false) (5 seconds, 918 milliseconds)
[info] - assign from specific offsets (failOnDataLoss: false) (3 seconds, 113 milliseconds)
[info] - assign from specific timestamps (failOnDataLoss: false) (5 seconds, 551 milliseconds)
[info] - subscribing topic by name from latest offsets (failOnDataLoss: false) (7 seconds, 170 milliseconds)
[info] - subscribing topic by name from earliest offsets (failOnDataLoss: false) (6 seconds, 179 milliseconds)
[info] - subscribing topic by name from specific offsets (failOnDataLoss: false) (3 seconds, 248 milliseconds)
[info] - subscribing topic by name from specific timestamps (failOnDataLoss: false) (5 seconds, 927 milliseconds)
[info] - subscribing topic by pattern from latest offsets (failOnDataLoss: false) (6 seconds, 980 milliseconds)
[info] - subscribing topic by pattern from earliest offsets (failOnDataLoss: false) (5 seconds, 860 milliseconds)
[info] - subscribing topic by pattern from specific offsets (failOnDataLoss: false) (3 seconds, 806 milliseconds)
[info] - subscribing topic by pattern from specific timestamps (failOnDataLoss: false) (5 seconds, 608 milliseconds)
[info] - bad source options (13 milliseconds)
[info] - unsupported kafka configs (9 milliseconds)
[info] - get offsets from case insensitive parameters (0 milliseconds)
[info] - Kafka column types (1 second, 233 milliseconds)
[info] - (de)serialization of initial offsets (521 milliseconds)
[info] - SPARK-26718 Rate limit set to Long.Max should not overflow integer during end offset calculation (1 second, 589 milliseconds)
[info] - maxOffsetsPerTrigger (6 seconds, 31 milliseconds)
[info] - input row metrics (1 second, 393 milliseconds)
[info] - subscribing topic by pattern with topic deletions (3 seconds, 992 milliseconds)
[info] - subscribe topic by pattern with topic recreation between batches (3 seconds, 407 milliseconds)
[info] - ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) (499 milliseconds)
[info] - deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) (2 seconds, 379 milliseconds)
[info] - deserialization of initial offset written by future version (239 milliseconds)
[info] - KafkaSource with watermark (1 second, 986 milliseconds)
[info] - delete a topic when a Spark job is running (3 seconds, 888 milliseconds)
[info] - SPARK-22956: currentPartitionOffsets should be set when no new data comes in (2 seconds, 889 milliseconds)
[info] - allow group.id prefix (0 milliseconds)
[info] - allow group.id override (0 milliseconds)
[info] - ensure stream-stream self-join generates only one offset in log and correct metrics (16 seconds, 313 milliseconds)
[info] - read Kafka transactional messages: read_committed (14 seconds, 311 milliseconds)
[info] - read Kafka transactional messages: read_uncommitted (9 seconds, 892 milliseconds)
[info] - SPARK-25495: FetchedData.reset should reset all fields (2 seconds, 180 milliseconds)
[info] - SPARK-27494: read kafka record containing null key/values. (1 second, 67 milliseconds)
[info] - SPARK-30656: minPartitions (2 seconds, 147 milliseconds)
[info] - V2 Source is used by default (405 milliseconds)
[info] - minPartitions is supported (151 milliseconds)
[info] - default config of includeHeader doesn't break existing query from Spark 2.4 (2 seconds, 568 milliseconds)
[info] InternalKafkaProducerPoolSuite:
[info] - Should return same cached instance on calling acquire with same params. (4 milliseconds)
[info] - Should return different cached instances on calling acquire with different params. (6 milliseconds)
[info] - expire instances (2 milliseconds)
[info] - reference counting with concurrent access (318 milliseconds)
[info] ScalaTest
[info] Run completed in 1 hour, 51 minutes, 9 seconds.
[info] Total number of tests run: 411
[info] Suites: completed 30, aborted 0
[info] Tests: succeeded 411, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 411, Failed 0, Errors 0, Passed 411
[error] (streaming / Test / test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 6697 s (01:51:37), completed Sep 13, 2021 7:56:39 PM
[error] running /home/jenkins/workspace/spark-branch-3.1-test-sbt-hadoop-2.7/build/sbt -Phadoop-2.7 -Phive-2.3 -Phive -Pkinesis-asl -Phive-thriftserver -Phadoop-cloud -Pyarn -Pspark-ganglia-lgpl -Pkubernetes -Pmesos test ; received return code 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
[Checks API] No suitable checks publisher found.
Finished: FAILURE
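
Note on the outcome: the 411/411 summary above covers only the Kafka connector suites visible in this tail of the log; the sbt.TestsFailedException names the streaming module, whose failing test output lies in the 33,343 KB skipped at the top. A hedged sketch for reproducing locally, with the module name taken from the [error] line and the profile flags mirrored from the Jenkins command (the relative path and sbt task syntax are assumptions about a standard Spark checkout, not taken from this log):

    # Assumption: run from the root of a Spark checkout on branch-3.1
    # Re-runs only the failing module's tests with the same profiles
    ./build/sbt -Phadoop-2.7 -Phive-2.3 -Phive -Pkinesis-asl -Phive-thriftserver \
      -Phadoop-cloud -Pyarn -Pspark-ganglia-lgpl -Pkubernetes -Pmesos \
      "streaming/test"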