Test name | Duration | Status |
(de)serialization of initial offsets | 2 sec | Passed |
deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) | 2.6 sec | Passed |
4 | 4.9 sec | Passed |
Kafka column types | 1.7 sec | Passed |
KafkaSource with watermark | 5.9 sec | Passed |
SPARK-26718 Rate limit set to Long.Max should not overflow integer during end offset calculation | 2.6 sec | Passed |
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 13 sec | Passed |
SPARK-27494: read kafka record containing null key/values | 3.4 sec | Passed |
SPARK-30656: minPartitions | 9.1 sec | Passed |
V2 Source is used by default | 3.6 sec | Passed |
assign from earliest offsets (failOnDataLoss: false) | 9.1 sec | Passed |
assign from earliest offsets (failOnDataLoss: true) | 7 sec | Passed |
assign from latest offsets (failOnDataLoss: false) | 6.3 sec | Passed |
assign from latest offsets (failOnDataLoss: true) | 15 sec | Passed |
assign from specific offsets (failOnDataLoss: false) | 3.8 sec | Passed |
assign from specific offsets (failOnDataLoss: true) | 7.2 sec | Passed |
assign from specific timestamps (failOnDataLoss: false) | 7.8 sec | Passed |
assign from specific timestamps (failOnDataLoss: true) | 5 sec | Passed |
bad source options | 13 ms | Passed |
cannot stop Kafka stream | 6.3 sec | Passed |
delete a topic when a Spark job is running | 17 sec | Passed |
deserialization of initial offset written by future version | 0.63 sec | Passed |
ensure stream-stream self-join generates only one offset in log and correct metrics | 32 sec | Passed |
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 1.5 sec | Passed |
get offsets from case insensitive parameters | 1 ms | Passed |
allow group.id override | 3.7 sec | Passed |
allow group.id prefix | 7.8 sec | Passed |
input row metrics | 9 sec | Passed |
maxOffsetsPerTrigger | 20 sec | Passed |
minPartitions is supported | 0.41 sec | Passed |
read Kafka transactional messages: read_committed | 28 sec | Passed |
read Kafka transactional messages: read_uncommitted | 15 sec | Passed |
SPARK-25495: FetchedData.reset should reset all fields | 4.8 sec | Passed |
subscribe topic by pattern with topic recreation between batches | 17 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 7 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 9 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: false) | 7.8 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: true) | 10 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: false) | 7.6 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: true) | 5.1 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 7.7 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 8.7 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 6.7 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 9.8 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 7.6 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 6.8 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 6.3 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 6.4 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 9.3 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 6.6 sec | Passed |
subscribing topic by pattern with topic deletions | -1 ms | Skipped |
unsupported kafka configs | 19 ms | Passed |
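For reference, the rows above exercise the option surface of Spark's Kafka structured-streaming source (subscribe/subscribePattern/assign, startingOffsets, failOnDataLoss, maxOffsetsPerTrigger, minPartitions, transactional reads). The Scala sketch below is not taken from the test suite; the broker address, topic name, and option values are illustrative placeholders showing roughly how those options are set on a streaming read.

```scala
import org.apache.spark.sql.SparkSession

object KafkaSourceOptionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-source-options-sketch")
      .master("local[2]")
      .getOrCreate()

    // Subscribe by topic name; "subscribePattern" and "assign" are the
    // alternatives covered by the "subscribing topic by pattern" and
    // "assign from ..." rows above.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "topic-1")                      // placeholder topic
      .option("startingOffsets", "earliest")               // or "latest", or a per-partition JSON map
      .option("failOnDataLoss", "false")                   // the (failOnDataLoss: false/true) variants
      .option("maxOffsetsPerTrigger", "1000")              // rate limit per micro-batch
      .option("minPartitions", "4")                        // minimum number of Spark partitions
      .option("kafka.isolation.level", "read_committed")   // transactional-read behaviour
      .load()

    // Fixed source schema checked by the "Kafka column types" test:
    // key, value, topic, partition, offset, timestamp, timestampType.
    df.printSchema()

    val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

The "specific timestamps" rows use the related startingOffsetsByTimestamp option instead of startingOffsets; the rest of the configuration is the same.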