Test Result : FileBasedDataSourceSuite

0 failures
53 tests
Took 25 sec.

All Tests

Test name | Duration | Status
Do not use cache on append | 0.67 sec | Passed
Do not use cache on overwrite | 0.69 sec | Passed
Enabling/disabling ignoreMissingFiles using csv | 1.9 sec | Passed
Enabling/disabling ignoreMissingFiles using json | 1.6 sec | Passed
Enabling/disabling ignoreMissingFiles using orc | 1.6 sec | Passed
Enabling/disabling ignoreMissingFiles using parquet | 1.6 sec | Passed
Enabling/disabling ignoreMissingFiles using text | 1.4 sec | Passed
File source v2: support partition pruning | 1.4 sec | Passed
File source v2: support passing data filters to FileScan without partitionFilters | 1.1 sec | Passed
Option recursiveFileLookup: disable partition inferring | 36 ms | Passed
Option recursiveFileLookup: recursive loading correctly | 97 ms | Passed
Return correct results when data columns overlap with partition columns | 0.69 sec | Passed
Return correct results when data columns overlap with partition columns (nested data) | 0.61 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - orc | 0.2 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet | 0.2 sec | Passed
SPARK-22146 read files containing special characters using csv | 0.2 sec | Passed
SPARK-22146 read files containing special characters using json | 0.16 sec | Passed
SPARK-22146 read files containing special characters using orc | 0.15 sec | Passed
SPARK-22146 read files containing special characters using parquet | 0.18 sec | Passed
SPARK-22146 read files containing special characters using text | 0.13 sec | Passed
SPARK-22790,SPARK-27668: spark.sql.sources.compressionFactor takes effect | 0.37 sec | Passed
SPARK-23072 Write and read back unicode column names - csv | 0.2 sec | Passed
SPARK-23072 Write and read back unicode column names - json | 0.17 sec | Passed
SPARK-23072 Write and read back unicode column names - orc | 0.17 sec | Passed
SPARK-23072 Write and read back unicode column names - parquet | 0.2 sec | Passed
SPARK-23148 read files containing special characters using csv with multiline enabled | 0.17 sec | Passed
SPARK-23148 read files containing special characters using json with multiline enabled | 0.2 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - orc | 0.25 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - parquet | 0.19 sec | Passed
SPARK-23372 error while writing empty schema files using csv | 13 ms | Passed
SPARK-23372 error while writing empty schema files using json | 13 ms | Passed
SPARK-23372 error while writing empty schema files using orc | 18 ms | Passed
SPARK-23372 error while writing empty schema files using parquet | 14 ms | Passed
SPARK-23372 error while writing empty schema files using text | 13 ms | Passed
SPARK-24204 error handling for unsupported Array/Map/Struct types - csv | 0.53 sec | Passed
SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc | 0.34 sec | Passed
SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc | 0.56 sec | Passed
SPARK-24691 error handling for unsupported types - text | 0.25 sec | Passed
SPARK-25237 compute correct input metrics in FileScanRDD | 0.2 sec | Passed
SPARK-31116: Select nested schema with case insensitive mode | 1.2 sec | Passed
SPARK-32827: Set max metadata string length | 29 ms | Passed
SPARK-32889: column name supports special characters using json | 1.4 sec | Passed
SPARK-32889: column name supports special characters using orc | 0.94 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - orc | 0.49 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - parquet | 0.73 sec | Passed
UDF input_file_name() | 0.35 sec | Passed
Writing empty datasets should not fail - csv | 78 ms | Passed
Writing empty datasets should not fail - json | 77 ms | Passed
Writing empty datasets should not fail - orc | 87 ms | Passed
Writing empty datasets should not fail - parquet | 77 ms | Passed
Writing empty datasets should not fail - text | 76 ms | Passed
sizeInBytes should be the total size of all files | 0.18 sec | Passed
test casts pushdown on orc/parquet for integral types | 0.92 sec | Passed