Test Result : DataFrameReaderWriterSuite

0 failures (±0)
55 tests (±0)
Took 29 sec.

All Tests

Test name | Duration | Status
Create table as select command should output correct schema: basic | 0.44 sec | Passed
Create table as select command should output correct schema: complex | 0.6 sec | Passed
Insert overwrite table command should output correct schema: basic | 0.47 sec | Passed
Insert overwrite table command should output correct schema: complex | 0.67 sec | Passed
SPARK-16848: table API throws an exception for user specified schema | 2 ms | Passed
SPARK-17230: write out results of decimal calculation | 0.32 sec | Passed
SPARK-18510: use user specified types for partition columns in file sources | 0.41 sec | Passed
SPARK-18899: append to a bucketed table using DataFrameWriter with mismatched bucketing | 0.19 sec | Passed
SPARK-18912: number of columns mismatch for non-file-based data source table | 18 ms | Passed
SPARK-18913: append to a table with special column names | 0.36 sec | Passed
SPARK-20431: Specify a schema by using a DDL-formatted string | 0.43 sec | Passed
SPARK-20460 Check name duplication in buckets | 22 ms | Passed
SPARK-20460 Check name duplication in schema | 3.6 sec | Passed
SPARK-26164: Allow concurrent writers for multiple partitions and buckets | 10 sec | Passed
SPARK-29537: throw exception when user defined a wrong base path | 0.18 sec | Passed
SPARK-32364: later option should override earlier options for load() | 6 ms | Passed
SPARK-32364: later option should override earlier options for save() | 12 ms | Passed
SPARK-32516: 'path' option cannot coexist with save()'s path parameter | 7 ms | Passed
SPARK-32516: 'path' or 'paths' option cannot coexist with load()'s path parameters | 3 ms | Passed
SPARK-32516: legacy path option behavior in load() | 0.69 sec | Passed
SPARK-32844: DataFrameReader.table take the specified options for V1 relation | 18 ms | Passed
SPARK-32853: consecutive load/save calls should be allowed | 13 ms | Passed
Throw exception on unsafe cast with ANSI casting policy | 46 ms | Passed
Throw exception on unsafe table insertion with strict casting policy | 0.39 sec | Passed
check jdbc() does not support partitioning, bucketBy or sortBy | 9 ms | Passed
column nullability and comment - write and then read | 0.78 sec | Passed
csv - API and behavior regarding schema | 1.3 sec | Passed
json - API and behavior regarding schema | 1.1 sec | Passed
load API | 13 ms | Passed
options | 6 ms | Passed
orc - API and behavior regarding schema | 0.92 sec | Passed
parquet - API and behavior regarding schema | 1.4 sec | Passed
parquet - column nullability -- write only | 0.13 sec | Passed
pass partitionBy as options | 12 ms | Passed
prevent all column partitioning | 11 ms | Passed
read a data source that does not extend RelationProvider | 74 ms | Passed
read a data source that does not extend SchemaRelationProvider | 96 ms | Passed
resolve default source | 7 ms | Passed
resolve default source without extending SchemaRelationProvider | 5 ms | Passed
resolve full class | 6 ms | Passed
save mode | 21 ms | Passed
save mode for data source v2 | 74 ms | Passed
saveAsTable with mode Append should not fail if the table already exists and a same-name temp view exist | 0.31 sec | Passed
saveAsTable with mode Append should not fail if the table not exists but a same-name temp view exist | 0.15 sec | Passed
saveAsTable with mode ErrorIfExists should not fail if the table not exists but a same-name temp view exist | 0.14 sec | Passed
saveAsTable with mode Ignore should create the table if the table not exists but a same-name temp view exist | 0.13 sec | Passed
saveAsTable with mode Overwrite should not drop the temp view if the table not exists but a same-name temp view exist | 0.12 sec | Passed
saveAsTable with mode Overwrite should not fail if the table already exists and a same-name temp view exist | 0.31 sec | Passed
test different data types for options | 7 ms | Passed
test path option in load | 6 ms | Passed
text - API and behavior regarding schema | 0.89 sec | Passed
textFile - API and behavior regarding schema | 0.5 sec | Passed
use Spark jobs to list files | 0.53 sec | Passed
write path implements onTaskCommit API correctly | 0.26 sec | Passed
writeStream cannot be called on non-streaming datasets | 11 ms | Passed
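For context, the sketch below is not the suite's test code; it is a minimal Scala illustration (app name and the "/tmp/people.csv" path are made up) of the behavior exercised by "SPARK-32364: later option should override earlier options for load()": when the same option key is set more than once on a DataFrameReader, the value set last wins.

// Minimal sketch, assuming a local Spark installation; not taken from DataFrameReaderWriterSuite.
import org.apache.spark.sql.SparkSession

object OptionOverrideSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("option-override-sketch") // hypothetical app name
      .getOrCreate()

    // "/tmp/people.csv" is a hypothetical input path used only for illustration.
    val df = spark.read
      .format("csv")
      .option("header", "false") // earlier setting of the "header" option...
      .option("header", "true")  // ...is overridden by this later setting
      .load("/tmp/people.csv")

    // With the later option in effect, the first CSV line is treated as the header.
    df.printSchema()
    spark.stop()
  }
}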