Test Name | Duration | Status
(de)serialization of initial offsets | 18 sec | Passed
0 (SPARK-19517) | 1 min 14 sec | Regression |
4 | 1 min 0 sec | Regression |
Kafka column types | 8.2 sec | Passed |
KafkaSource with watermark | 1 min 0 sec | Regression |
Max should not overflow integer during end offset calculation | 11 sec | Passed |
SPARK-22956: currentPartitionOffsets should be set when no new data comes in | 45 sec | Regression |
SPARK-27494: read kafka record containing null key/values | 1 min 0 sec | Regression |
SPARK-30656: minPartitions | 1 min 0 sec | Regression |
V2 Source is used by default | 1 min 0 sec | Regression |
assign from earliest offsets (failOnDataLoss: false) | 13 sec | Passed |
assign from earliest offsets (failOnDataLoss: true) | 11 sec | Passed |
assign from latest offsets (failOnDataLoss: false) | 23 sec | Passed |
assign from latest offsets (failOnDataLoss: true) | 24 sec | Passed |
assign from specific offsets (failOnDataLoss: false) | 18 sec | Passed |
assign from specific offsets (failOnDataLoss: true) | 8.2 sec | Passed |
assign from specific timestamps (failOnDataLoss: false) | 12 sec | Passed |
assign from specific timestamps (failOnDataLoss: true) | 21 sec | Passed |
bad source options | 31 ms | Passed |
cannot stop Kafka stream | 8.4 sec | Passed |
delete a topic when a Spark job is running | 1 min 0 sec | Regression |
deserialization of initial offset written by future version | 56 sec | Passed |
ensure stream-stream self-join generates only one offset in log and correct metrics | 1 min 1 sec | Regression |
ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) | 12 sec | Passed |
get offsets from case insensitive parameters | 0 ms | Passed |
id override | 1 min 0 sec | Regression |
id prefix | 1 min 0 sec | Regression |
input row metrics | 48 sec | Passed |
maxOffsetsPerTrigger | 2 min 3 sec | Passed |
minPartitions is supported | 10 sec | Regression |
read Kafka transactional messages: read_committed | 1 min 0 sec | Regression |
read Kafka transactional messages: read_uncommitted | 1 min 24 sec | Regression |
reset should reset all fields | 1 min 0 sec | Regression |
subscribe topic by pattern with topic recreation between batches | 47 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: false) | 36 sec | Passed |
subscribing topic by name from earliest offsets (failOnDataLoss: true) | 17 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: false) | 37 sec | Passed |
subscribing topic by name from latest offsets (failOnDataLoss: true) | 14 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: false) | 13 sec | Passed |
subscribing topic by name from specific offsets (failOnDataLoss: true) | 8.1 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: false) | 35 sec | Passed |
subscribing topic by name from specific timestamps (failOnDataLoss: true) | 11 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: false) | 1 min 9 sec | Passed |
subscribing topic by pattern from earliest offsets (failOnDataLoss: true) | 19 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: false) | 23 sec | Passed |
subscribing topic by pattern from latest offsets (failOnDataLoss: true) | 21 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: false) | 55 sec | Passed |
subscribing topic by pattern from specific offsets (failOnDataLoss: true) | 21 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: false) | 1 min 1 sec | Passed |
subscribing topic by pattern from specific timestamps (failOnDataLoss: true) | 25 sec | Passed |
subscribing topic by pattern with topic deletions | 1 min 20 sec | Regression |
unsupported kafka configs | 28 ms | Passed |