Test Result : KafkaMicroBatchV2SourceSuite

6 failures (+6)
52 tests (±0)
Took 21 min.

All Tests

Test name | Duration | Status
(de)serialization of initial offsets | 3.4 sec | Passed
0 (SPARK-19517) | 8.1 sec | Passed
| 422 sec | Passed
Kafka column types | 7.2 sec | Passed
KafkaSource with watermark | 8 sec | Passed
Max should not overflow integer during end offset calculation | 33 sec | Passed
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 1 min 38 sec | Regression
SPARK-27494: read kafka record containing null key/values | 17 sec | Passed
SPARK-30656: minPartitions | 15 sec | Passed
V2 Source is used by default | 2.8 sec | Passed
assign from earliest offsets (failOnDataLoss: false) | 52 sec | Regression
assign from earliest offsets (failOnDataLoss: true) | 11 sec | Passed
assign from latest offsets (failOnDataLoss: false) | 17 sec | Passed
assign from latest offsets (failOnDataLoss: true) | 16 sec | Passed
assign from specific offsets (failOnDataLoss: false) | 36 sec | Regression
assign from specific offsets (failOnDataLoss: true) | 14 sec | Passed
assign from specific timestamps (failOnDataLoss: false) | 58 sec | Passed
assign from specific timestamps (failOnDataLoss: true) | 17 sec | Passed
bad source options | 43 ms | Passed
cannot stop Kafka stream | 6.9 sec | Passed
delete a topic when a Spark job is running | 1 min 40 sec | Regression
deserialization of initial offset written by future version | 3.1 sec | Passed
ensure stream-stream self-join generates only one offset in log and correct metrics | 1 min 1 sec | Passed
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 4 sec | Passed
get offsets from case insensitive parameters | 1 ms | Passed
id override | 1 min 0 sec | Regression
id prefix | 1 min 0 sec | Regression
input row metrics | 7.5 sec | Passed
maxOffsetsPerTrigger | 1 min 51 sec | Passed
minPartitions is supported | 1.4 sec | Passed
read Kafka transactional messages: read_committed | 1 min 15 sec | Passed
read Kafka transactional messages: read_uncommitted | 32 sec | Passed
reset should reset all fields | 8.7 sec | Passed
subscribe topic by pattern with topic recreation between batches | 14 sec | Passed
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 24 sec | Passed
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 22 sec | Passed
subscribing topic by name from latest offsets (failOnDataLoss: false) | 22 sec | Passed
subscribing topic by name from latest offsets (failOnDataLoss: true) | 12 sec | Passed
subscribing topic by name from specific offsets (failOnDataLoss: false) | 13 sec | Passed
subscribing topic by name from specific offsets (failOnDataLoss: true) | 9.2 sec | Passed
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 19 sec | Passed
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 20 sec | Passed
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 30 sec | Passed
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 17 sec | Passed
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 24 sec | Passed
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 14 sec | Passed
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 8.5 sec | Passed
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 18 sec | Passed
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 24 sec | Passed
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 20 sec | Passed
subscribing topic by pattern with topic deletions | 22 sec | Passed
unsupported kafka configs | 39 ms | Passed