Test Result: FileBasedDataSourceSuite

0 failures (±0)
53 tests (±0)
Took 44 sec.

All Tests

Test name | Duration | Status
--- | --- | ---
Do not use cache on append | 0.96 sec | Passed
Do not use cache on overwrite | 1 sec | Passed
Enabling/disabling ignoreMissingFiles using csv | 3.6 sec | Passed
Enabling/disabling ignoreMissingFiles using json | 4.8 sec | Passed
Enabling/disabling ignoreMissingFiles using orc | 2.9 sec | Passed
Enabling/disabling ignoreMissingFiles using parquet | 2.9 sec | Passed
Enabling/disabling ignoreMissingFiles using text | 3.2 sec | Passed
File source v2: support partition pruning | 2.2 sec | Passed
File source v2: support passing data filters to FileScan without partitionFilters | 1.9 sec | Passed
File table location should include both values of option `path` and `paths` | 0.62 sec | Passed
Option pathGlobFilter: filter files correctly | 0.48 sec | Passed
Option pathGlobFilter: simple extension filtering should contains partition info | 0.69 sec | Passed
Option recursiveFileLookup: disable partition inferring | 62 ms | Passed
Option recursiveFileLookup: recursive loading correctly | 0.11 sec | Passed
Return correct results when data columns overlap with partition columns | 1 sec | Passed
Return correct results when data columns overlap with partition columns (nested data) | 1.1 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - orc | 0.29 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet | 0.41 sec | Passed
SPARK-22146 read files containing special characters using csv | 0.38 sec | Passed
SPARK-22146 read files containing special characters using json | 0.29 sec | Passed
SPARK-22146 read files containing special characters using orc | 0.27 sec | Passed
SPARK-22146 read files containing special characters using parquet | 0.37 sec | Passed
SPARK-22146 read files containing special characters using text | 0.25 sec | Passed
SPARK-23072 Write and read back unicode column names - csv | 0.32 sec | Passed
SPARK-23072 Write and read back unicode column names - json | 0.28 sec | Passed
SPARK-23072 Write and read back unicode column names - orc | 0.27 sec | Passed
SPARK-23072 Write and read back unicode column names - parquet | 0.4 sec | Passed
SPARK-23148 read files containing special characters using csv with multiline enabled | 0.31 sec | Passed
SPARK-23148 read files containing special characters using json with multiline enabled | 0.3 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - orc | 0.3 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - parquet | 0.34 sec | Passed
SPARK-23372 error while writing empty schema files using csv | 9 ms | Passed
SPARK-23372 error while writing empty schema files using json | 17 ms | Passed
SPARK-23372 error while writing empty schema files using orc | 19 ms | Passed
SPARK-23372 error while writing empty schema files using parquet | 8 ms | Passed
SPARK-23372 error while writing empty schema files using text | 11 ms | Passed
SPARK-24204 error handling for unsupported Array/Map/Struct types - csv | 0.89 sec | Passed
SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc | 0.69 sec | Passed
SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc | 1 sec | Passed
SPARK-24691 error handling for unsupported types - text | 0.34 sec | Passed
SPARK-25237 compute correct input metrics in FileScanRDD | 0.34 sec | Passed
SPARK-31116: Select nested schema with case insensitive mode | 3.5 sec | Passed
SPARK-31935: Hadoop file system config should be effective in data source options | 0.71 sec | Passed
UDF input_file_name() | 0.47 sec | Passed
Writing empty datasets should not fail - csv | 0.14 sec | Passed
Writing empty datasets should not fail - json | 0.12 sec | Passed
Writing empty datasets should not fail - orc | 0.16 sec | Passed
Writing empty datasets should not fail - parquet | 0.12 sec | Passed
Writing empty datasets should not fail - text | 0.15 sec | Passed
caseSensitive - orc | 0.85 sec | Passed
caseSensitive - parquet | 0.81 sec | Passed
compressionFactor takes effect | 0.65 sec | Passed
sizeInBytes should be the total size of all files | 0.34 sec | Passed