Test Result: KafkaMicroBatchV2SourceSuite

0 failures (±0), 1 skipped (±0)
52 tests (±0)
Took 4 min 0 sec.

All Tests

Test name | Duration | Status
(de)serialization of initial offsets | 1 sec | Passed
0 (SPARK-19517) | 1.9 sec | Passed
4 | 3.8 sec | Passed
Kafka column types | 1.6 sec | Passed
KafkaSource with watermark | 1.6 sec | Passed
Max should not overflow integer during end offset calculation | 2.6 sec | Passed
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 8.2 sec | Passed
SPARK-27494: read kafka record containing null key/values | 1.7 sec | Passed
SPARK-30656: minPartitions | 2.2 sec | Passed
V2 Source is used by default | 1.1 sec | Passed
assign from earliest offsets (failOnDataLoss: false) | 3.2 sec | Passed
assign from earliest offsets (failOnDataLoss: true) | 3.1 sec | Passed
assign from latest offsets (failOnDataLoss: false) | 4.4 sec | Passed
assign from latest offsets (failOnDataLoss: true) | 4 sec | Passed
assign from specific offsets (failOnDataLoss: false) | 2.7 sec | Passed
assign from specific offsets (failOnDataLoss: true) | 2.4 sec | Passed
assign from specific timestamps (failOnDataLoss: false) | 3.5 sec | Passed
assign from specific timestamps (failOnDataLoss: true) | 6 sec | Passed
bad source options | 25 ms | Passed
cannot stop Kafka stream | 1.5 sec | Passed
delete a topic when a Spark job is running | 22 sec | Passed
deserialization of initial offset written by future version | 0.33 sec | Passed
ensure stream-stream self-join generates only one offset in log and correct metrics | 20 sec | Passed
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 0.88 sec | Passed
get offsets from case insensitive parameters | 1 ms | Passed
id override | 3.1 sec | Passed
id prefix | 2.6 sec | Passed
input row metrics | 1.7 sec | Passed
maxOffsetsPerTrigger | 10 sec | Passed
minPartitions is supported | 0.32 sec | Passed
read Kafka transactional messages: read_committed | 19 sec | Passed
read Kafka transactional messages: read_uncommitted | 14 sec | Passed
reset should reset all fields | 2.6 sec | Passed
subscribe topic by pattern with topic recreation between batches | 10 sec | Passed
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 4.4 sec | Passed
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 3.8 sec | Passed
subscribing topic by name from latest offsets (failOnDataLoss: false) | 6.5 sec | Passed
subscribing topic by name from latest offsets (failOnDataLoss: true) | 4.3 sec | Passed
subscribing topic by name from specific offsets (failOnDataLoss: false) | 3.1 sec | Passed
subscribing topic by name from specific offsets (failOnDataLoss: true) | 2.8 sec | Passed
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 4.5 sec | Passed
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 3.7 sec | Passed
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 5.5 sec | Passed
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 4.3 sec | Passed
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 8 sec | Passed
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 5.3 sec | Passed
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 3 sec | Passed
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 2.5 sec | Passed
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 5.1 sec | Passed
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 6 sec | Passed
subscribing topic by pattern with topic deletions | -1 ms | Skipped
unsupported kafka configs | 32 ms | Passed
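
For context, the options exercised by these tests (subscribe / subscribePattern / assign, startingOffsets, failOnDataLoss, maxOffsetsPerTrigger, minPartitions) are standard Kafka source options in Spark Structured Streaming. Below is a minimal sketch, not taken from the suite, of the kind of micro-batch Kafka read these tests cover; the broker address "localhost:9092" and topic "topic-1" are hypothetical placeholders.

// Minimal sketch of a micro-batch Kafka read using options referenced by the test names above.
import org.apache.spark.sql.SparkSession

object KafkaMicroBatchSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-micro-batch-sketch")
      .master("local[2]")
      .getOrCreate()

    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // hypothetical broker
      .option("subscribe", "topic-1")                      // or "subscribePattern" / "assign"
      .option("startingOffsets", "earliest")               // "earliest", "latest", or specific offsets/timestamps
      .option("failOnDataLoss", "false")                   // the flag varied across the (failOnDataLoss: ...) tests
      .option("maxOffsetsPerTrigger", "1000")              // cap on records per micro-batch
      .option("minPartitions", "4")                        // minimum number of read partitions
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    val query = df.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}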