Test name | Duration | Result
(de)serialization of initial offsets | 1.8 sec | Passed
0 (SPARK-19517) | 6.8 sec | Passed |
4 | 13 sec | Regression |
Kafka column types | 4.9 sec | Passed |
KafkaSource with watermark | 13 sec | Passed |
Max should not overflow integer during end offset calculation | 4 sec | Passed |
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 57 sec | Passed |
SPARK-27494: read kafka record containing null key/values | 1 min 41 sec | Passed |
SPARK-30656: minPartitions | 29 sec | Passed |
V2 Source is used by default | 11 sec | Passed |
assign from earliest offsets (failOnDataLoss: false) | 9.2 sec | Passed |
assign from earliest offsets (failOnDataLoss: true) | 7.9 sec | Passed |
assign from latest offsets (failOnDataLoss: false) | 11 sec | Passed |
assign from latest offsets (failOnDataLoss: true) | 14 sec | Passed |
assign from specific offsets (failOnDataLoss: false) | 13 sec | Passed |
assign from specific offsets (failOnDataLoss: true) | 4.6 sec | Passed |
assign from specific timestamps (failOnDataLoss: false) | 11 sec | Passed |
assign from specific timestamps (failOnDataLoss: true) | 7.4 sec | Passed |
bad source options | 18 ms | Passed |
cannot stop Kafka stream | 12 sec | Passed |
delete a topic when a Spark job is running | 24 sec | Passed |
deserialization of initial offset written by future version | 4.2 sec | Passed |
ensure stream-stream self-join generates only one offset in log and correct metrics | 54 sec | Passed |
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 43 sec | Passed |
get offsets from case insensitive parameters | 0 ms | Passed |
id override | 7 sec | Passed |
id prefix | 17 sec | Passed |
input row metrics | 6.4 sec | Passed |
maxOffsetsPerTrigger | 41 sec | Passed |
minPartitions is supported | 5.8 sec | Passed |
read Kafka transactional messages: read_committed | 1 min 49 sec | Passed |
read Kafka transactional messages: read_uncommitted | 1 min 0 sec | Regression |
reset should reset all fields | 1 min 0 sec | Regression |
subscribe topic by pattern with topic recreation between batches | 1 min 0 sec | Regression |
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 17 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 10 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: false) | 18 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: true) | 17 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: false) | 10 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: true) | 8.2 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 18 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 14 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 18 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 21 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 17 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 10 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 10 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 7.5 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 19 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 27 sec | Passed |
subscribing topic by pattern with topic deletions | 1 min 24 sec | Regression |
unsupported kafka configs | 35 ms | Passed |
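For orientation, the options these tests exercise (`subscribe`/`subscribePattern`/`assign`, `startingOffsets`, `failOnDataLoss`, `maxOffsetsPerTrigger`, `minPartitions`) combine in a Structured Streaming read roughly as below. This is a minimal sketch, not part of the test suite; the broker address and topic pattern are placeholders, and running it requires PySpark plus a live Kafka broker.

```python
# Sketch of a Kafka Structured Streaming read using the source options
# covered by the suite above. Broker and topic names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-options-sketch").getOrCreate()

df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092")  # placeholder broker
      .option("subscribePattern", "topic-.*")           # or "subscribe" / "assign"
      .option("startingOffsets", "earliest")            # or "latest", or per-partition JSON
      .option("failOnDataLoss", "false")                # tolerate deleted topics/aged-out offsets
      .option("maxOffsetsPerTrigger", "10000")          # cap records read per micro-batch
      .option("minPartitions", "8")                     # split Kafka partitions into more tasks
      .load())

# The Kafka source yields fixed columns (the "Kafka column types" test checks these):
# key, value, topic, partition, offset, timestamp, timestampType
```

The `failOnDataLoss` flag is why many tests above run in both `true` and `false` variants: with `false`, the query logs a warning and continues when offsets disappear (e.g. topic deletion mid-query); with `true`, it fails fast.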