Test name | Duration | Status
--- | --- | ---
(de)serialization of initial offsets | 0.66 sec | Passed |
0 (SPARK-19517) | 1.2 sec | Passed |
4 | 2.1 sec | Passed |
Kafka column types | 1 sec | Passed |
KafkaSource with watermark | 1.2 sec | Passed |
Max should not overflow integer during end offset calculation | 1.1 sec | Passed |
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 6.6 sec | Passed |
SPARK-27494: read kafka record containing null key/values | 1.5 sec | Passed |
SPARK-30656: minPartitions | 2.9 sec | Passed |
V2 Source is used by default | 0.64 sec | Passed |
assign from earliest offsets (failOnDataLoss: false) | 2.1 sec | Passed |
assign from earliest offsets (failOnDataLoss: true) | 2.3 sec | Passed |
assign from latest offsets (failOnDataLoss: false) | 2.8 sec | Passed |
assign from latest offsets (failOnDataLoss: true) | 2.5 sec | Passed |
assign from specific offsets (failOnDataLoss: false) | 1.7 sec | Passed |
assign from specific offsets (failOnDataLoss: true) | 1.7 sec | Passed |
assign from specific timestamps (failOnDataLoss: false) | 2.5 sec | Passed |
assign from specific timestamps (failOnDataLoss: true) | 2.3 sec | Passed |
bad source options | 11 ms | Passed |
cannot stop Kafka stream | 1.3 sec | Passed |
delete a topic when a Spark job is running | 11 sec | Passed |
deserialization of initial offset written by future version | 0.31 sec | Passed |
ensure stream-stream self-join generates only one offset in log and correct metrics | 11 sec | Passed |
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 0.44 sec | Passed |
get offsets from case insensitive parameters | 2 ms | Passed |
id override | 2.1 sec | Passed |
id prefix | 1.9 sec | Passed |
input row metrics | 1.5 sec | Passed |
maxOffsetsPerTrigger | 7.2 sec | Passed |
minPartitions is supported | 0.25 sec | Passed |
read Kafka transactional messages: read_committed | 18 sec | Passed |
read Kafka transactional messages: read_uncommitted | 10 sec | Passed |
reset should reset all fields | 2.2 sec | Passed |
subscribe topic by pattern with topic recreation between batches | 9.5 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 2.8 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 3 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: false) | 3.3 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: true) | 3 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: false) | 2.1 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: true) | 2.4 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 2.7 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 4.8 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 2.8 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 2.7 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 3.3 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 2.9 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 3.9 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 2.9 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 3.1 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 4.4 sec | Passed |
subscribing topic by pattern with topic deletions | -1 ms | Skipped |
unsupported kafka configs | 12 ms | Passed |
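The tests above exercise the source options of Spark Structured Streaming's Kafka connector (`subscribe`, `subscribePattern`, `assign`, `startingOffsets`, `failOnDataLoss`, `maxOffsetsPerTrigger`, `minPartitions`). A minimal sketch of how these options combine in user code follows; the broker address, topic name, and option values are placeholders, not taken from the test run, and the snippet needs a live Spark session and Kafka broker to execute:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-source-sketch").getOrCreate()

// Choose exactly one of "subscribe" (topic names), "subscribePattern"
// (regex over topic names), or "assign" (explicit partitions as JSON).
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder address
  .option("subscribe", "topic-1")                      // placeholder topic
  .option("startingOffsets", "earliest")               // or "latest", or a JSON offset map
  .option("failOnDataLoss", "false")                   // both settings are covered above
  .option("maxOffsetsPerTrigger", "1000")              // rate-limit records per micro-batch
  .option("minPartitions", "4")                        // SPARK-30656: split Kafka partitions
  .load()

// Fixed output schema checked by the "Kafka column types" test:
// key, value, topic, partition, offset, timestamp, timestampType
df.printSchema()
```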