NewSparkPullRequestBuilder #5037
Build #5037 (May 15, 2020 6:36:20 AM)
Started 8 mo 13 days ago; took 5 hr 7 min on amp-jenkins-worker-04
No changes.
Started by remote host 35.243.23.32
Revision: 9c97eb2eb863f8d2d2c2718c6e48058e8850c851 (refs/remotes/origin/pr/28526/merge)
Test Result: 83 failures (+82)
org.apache.spark.scheduler.BarrierTaskContextSuite.support multiple barrier() call within a single task
org.apache.spark.sql.kafka010.KafkaContinuousSourceStressForDontFailOnDataLossSuite.stress test for failOnDataLoss=false
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.cannot stop Kafka stream
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from latest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from earliest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from specific offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from specific timestamps (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from latest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from earliest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from specific offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from specific timestamps (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from latest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from earliest offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from specific offsets (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from specific timestamps (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from latest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from earliest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from specific offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.assign from specific timestamps (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from latest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from earliest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from specific offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by name from specific timestamps (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from latest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from earliest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from specific offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.subscribing topic by pattern from specific timestamps (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.Kafka column types
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.read Kafka transactional messages: read_committed
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.read Kafka transactional messages: read_uncommitted
org.apache.spark.sql.kafka010.KafkaContinuousSourceSuite.SPARK-27494: read kafka record containing null key/values
org.apache.spark.sql.kafka010.KafkaContinuousSourceTopicDeletionSuite.subscribing topic by pattern with topic deletions
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite.subscribing topic by pattern from specific timestamps (failOnDataLoss: true)
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite.assign from earliest offsets (failOnDataLoss: false)
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite.delete a topic when a Spark job is running
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite.read Kafka transactional messages: read_committed
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite.read Kafka transactional messages: read_uncommitted
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.subscribing topic by pattern with topic deletions
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.0 (SPARK-19517)
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.KafkaSource with watermark
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.delete a topic when a Spark job is running
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.SPARK-22956: currentPartitionOffsets should be set when no new data comes in
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.id prefix
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.id override
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.ensure stream-stream self-join generates only one offset in log and correct metrics
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.read Kafka transactional messages: read_committed
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.read Kafka transactional messages: read_uncommitted
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.reset should reset all fields
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.SPARK-27494: read kafka record containing null key/values
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.SPARK-30656: minPartitions
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.V2 Source is used by default
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.minPartitions is supported
org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.4
org.apache.spark.sql.kafka010.KafkaRelationSuiteV2.reuse same dataframe in query
org.apache.spark.sql.kafka010.KafkaRelationSuiteV2.test late binding start offsets
org.apache.spark.sql.kafka010.KafkaSourceStressSuite.stress test with multiple topics and partitions
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumerSuite.SPARK-25151 Handles multiple tasks in executor fetching same (topic, partition) pair and same offset (edge-case) - data not in use
org.apache.spark.streaming.kafka010.KafkaDataConsumerSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.DataFrameTimeWindowingSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.DatasetCacheSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.JoinSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.connector.V2CommandsCaseSensitivitySuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.AggregatingAccumulatorSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.OptimizeMetadataOnlyQuerySuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.SQLWindowFunctionSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.WholeStageCodegenSparkSubmitSuite.Generated code on driver should not embed platform-specific constant
org.apache.spark.sql.execution.adaptive.AdaptiveQueryExecSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.streaming.CompactibleFileStreamLogSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.execution.ui.SQLAppStatusListenerMemoryLeakSuite.no memory leak
org.apache.spark.sql.execution.ui.SQLAppStatusListenerSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.streaming.StreamingOuterJoinSuite.SPARK-26187 self right outer join should not return outer nulls for already matched rows
org.apache.spark.sql.streaming.StreamingOuterJoinSuite.4
org.apache.spark.sql.streaming.StreamingOuterJoinSuite.SPARK-29438: ensure UNION doesn't lead stream-stream join to use shifted partition IDs
org.apache.spark.sql.streaming.continuous.ContinuousStressSuite.restarts
org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.(It is not a test it is a sbt.testing.SuiteSelector)
org.apache.spark.sql.hive.HiveSparkSubmitSuite.temporary Hive UDF: define a UDF and use it
org.apache.spark.sql.hive.HiveSparkSubmitSuite.permanent Hive UDF: define a UDF and use it
org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-11009 fix wrong result of Window function in cluster mode
org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-14244 fix window partition size attribute binding failure
org.apache.spark.sql.hive.HiveSparkSubmitSuite.dir
org.apache.spark.sql.hive.HiveSparkSubmitSuite.dir
org.apache.spark.sql.hive.HiveSparkSubmitSuite.ConnectionURL
org.apache.spark.sql.hive.HiveSparkSubmitSuite.SPARK-18989: DESC TABLE should not fail with format class not found
(list truncated; remaining failed tests omitted)