Test Result: DataFrameReaderWriterSuite

0 failures
54 tests
Took 16 sec.

All Tests

| Test name | Duration | Status |
| --- | --- | --- |
| Create table as select command should output correct schema: basic | 0.32 sec | Passed |
| Create table as select command should output correct schema: complex | 0.46 sec | Passed |
| Insert overwrite table command should output correct schema: basic | 0.33 sec | Passed |
| Insert overwrite table command should output correct schema: complex | 0.61 sec | Passed |
| SPARK-16848: table API throws an exception for user specified schema | 4 ms | Passed |
| SPARK-17230: write out results of decimal calculation | 0.28 sec | Passed |
| SPARK-18510: use user specified types for partition columns in file sources | 0.51 sec | Passed |
| SPARK-18899: append to a bucketed table using DataFrameWriter with mismatched bucketing | 0.25 sec | Passed |
| SPARK-18912: number of columns mismatch for non-file-based data source table | 38 ms | Passed |
| SPARK-18913: append to a table with special column names | 0.28 sec | Passed |
| SPARK-20431: Specify a schema by using a DDL-formatted string | 0.35 sec | Passed |
| SPARK-20460 Check name duplication in buckets | 56 ms | Passed |
| SPARK-20460 Check name duplication in schema | 2.5 sec | Passed |
| SPARK-29537: throw exception when user defined a wrong base path | 0.21 sec | Passed |
| SPARK-32364: later option should override earlier options for load() | 12 ms | Passed |
| SPARK-32364: later option should override earlier options for save() | 22 ms | Passed |
| SPARK-32516: 'path' option cannot coexist with save()'s path parameter | 6 ms | Passed |
| SPARK-32516: 'path' or 'paths' option cannot coexist with load()'s path parameters | 5 ms | Passed |
| SPARK-32516: legacy path option behavior in load() | 0.54 sec | Passed |
| SPARK-32844: DataFrameReader.table take the specified options for V1 relation | 0.14 sec | Passed |
| SPARK-32853: consecutive load/save calls should be allowed | 21 ms | Passed |
| Throw exception on unsafe cast with ANSI casting policy | 0.11 sec | Passed |
| Throw exception on unsafe table insertion with strict casting policy | 0.43 sec | Passed |
| check jdbc() does not support partitioning, bucketBy or sortBy | 13 ms | Passed |
| column nullability and comment - write and then read | 0.65 sec | Passed |
| csv - API and behavior regarding schema | 1.4 sec | Passed |
| json - API and behavior regarding schema | 1 sec | Passed |
| load API | 25 ms | Passed |
| options | 14 ms | Passed |
| orc - API and behavior regarding schema | 0.84 sec | Passed |
| parquet - API and behavior regarding schema | 1.3 sec | Passed |
| parquet - column nullability -- write only | 0.13 sec | Passed |
| pass partitionBy as options | 25 ms | Passed |
| prevent all column partitioning | 16 ms | Passed |
| read a data source that does not extend RelationProvider | 0.11 sec | Passed |
| read a data source that does not extend SchemaRelationProvider | 0.23 sec | Passed |
| resolve default source | 16 ms | Passed |
| resolve default source without extending SchemaRelationProvider | 11 ms | Passed |
| resolve full class | 11 ms | Passed |
| save mode | 39 ms | Passed |
| save mode for data source v2 | 63 ms | Passed |
| saveAsTable with mode Append should not fail if the table already exists and a same-name temp view exist | 0.27 sec | Passed |
| saveAsTable with mode Append should not fail if the table not exists but a same-name temp view exist | 0.11 sec | Passed |
| saveAsTable with mode ErrorIfExists should not fail if the table not exists but a same-name temp view exist | 95 ms | Passed |
| saveAsTable with mode Ignore should create the table if the table not exists but a same-name temp view exist | 97 ms | Passed |
| saveAsTable with mode Overwrite should not drop the temp view if the table not exists but a same-name temp view exist | 95 ms | Passed |
| saveAsTable with mode Overwrite should not fail if the table already exists and a same-name temp view exist | 0.32 sec | Passed |
| test different data types for options | 12 ms | Passed |
| test path option in load | 14 ms | Passed |
| text - API and behavior regarding schema | 0.74 sec | Passed |
| textFile - API and behavior regarding schema | 0.46 sec | Passed |
| use Spark jobs to list files | 0.49 sec | Passed |
| write path implements onTaskCommit API correctly | 0.18 sec | Passed |
| writeStream cannot be called on non-streaming datasets | 12 ms | Passed |