Test Result: FileBasedDataSourceSuite

0 failures (±0)
53 tests (+15)
Took 2 min 11 sec.

All Tests

Test name                                                                               | Duration | Status
----------------------------------------------------------------------------------------|----------|-------
Do not use cache on append                                                              | 3.3 sec  | Passed
Do not use cache on overwrite                                                           | 4.6 sec  | Passed
Enabling/disabling ignoreMissingFiles using csv                                         | 6.6 sec  | Passed
Enabling/disabling ignoreMissingFiles using json                                        | 7.9 sec  | Passed
Enabling/disabling ignoreMissingFiles using orc                                         | 7.1 sec  | Passed
Enabling/disabling ignoreMissingFiles using parquet                                     | 7.6 sec  | Passed
Enabling/disabling ignoreMissingFiles using text                                        | 5.3 sec  | Passed
File source v2: support partition pruning                                               | 7.5 sec  | Passed
File source v2: support passing data filters to FileScan without partitionFilters       | 4.4 sec  | Passed
File table location should include both values of option `path` and `paths`             | 1.4 sec  | Passed
Option pathGlobFilter: filter files correctly                                           | 0.93 sec | Passed
Option pathGlobFilter: simple extension filtering should contains partition info        | 1.3 sec  | Passed
Option recursiveFileLookup: disable partition inferring                                 | 0.25 sec | Passed
Option recursiveFileLookup: recursive loading correctly                                 | 0.42 sec | Passed
Return correct results when data columns overlap with partition columns                 | 4.4 sec  | Passed
Return correct results when data columns overlap with partition columns (nested data)   | 4.3 sec  | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - orc             | 1.3 sec  | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet         | 0.93 sec | Passed
SPARK-22146 read files containing special characters using csv                          | 0.77 sec | Passed
SPARK-22146 read files containing special characters using json                         | 0.6 sec  | Passed
SPARK-22146 read files containing special characters using orc                          | 1.5 sec  | Passed
SPARK-22146 read files containing special characters using parquet                      | 1.6 sec  | Passed
SPARK-22146 read files containing special characters using text                         | 0.54 sec | Passed
SPARK-23072 Write and read back unicode column names - csv                              | 3.7 sec  | Passed
SPARK-23072 Write and read back unicode column names - json                             | 1.7 sec  | Passed
SPARK-23072 Write and read back unicode column names - orc                              | 5.1 sec  | Passed
SPARK-23072 Write and read back unicode column names - parquet                          | 2.3 sec  | Passed
SPARK-23148 read files containing special characters using csv with multiline enabled   | 1 sec    | Passed
SPARK-23148 read files containing special characters using json with multiline enabled  | 0.79 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - orc                | 0.58 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - parquet            | 0.59 sec | Passed
SPARK-23372 error while writing empty schema files using csv                            | 0.17 sec | Passed
SPARK-23372 error while writing empty schema files using json                           | 0.1 sec  | Passed
SPARK-23372 error while writing empty schema files using orc                            | 0.22 sec | Passed
SPARK-23372 error while writing empty schema files using parquet                        | 0.38 sec | Passed
SPARK-23372 error while writing empty schema files using text                           | 0.12 sec | Passed
SPARK-24204 error handling for unsupported Array/Map/Struct types - csv                 | 2.7 sec  | Passed
SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc | 4.9 sec | Passed
SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc          | 2.9 sec  | Passed
SPARK-24691 error handling for unsupported types - text                                 | 1.9 sec  | Passed
SPARK-25237 compute correct input metrics in FileScanRDD                                | 1.5 sec  | Passed
SPARK-31116: Select nested schema with case insensitive mode                            | 3.5 sec  | Passed
SPARK-31935: Hadoop file system config should be effective in data source options       | 2.4 sec  | Passed
UDF input_file_name()                                                                   | 1 sec    | Passed
Writing empty datasets should not fail - csv                                            | 0.68 sec | Passed
Writing empty datasets should not fail - json                                           | 0.43 sec | Passed
Writing empty datasets should not fail - orc                                            | 8 sec    | Passed
Writing empty datasets should not fail - parquet                                        | 1.4 sec  | Passed
Writing empty datasets should not fail - text                                           | 0.33 sec | Passed
caseSensitive - orc                                                                     | 2.1 sec  | Passed
caseSensitive - parquet                                                                 | 2.3 sec  | Passed
compressionFactor takes effect                                                          | 1.7 sec  | Passed
sizeInBytes should be the total size of all files                                       | 1.1 sec  | Passed