Console Output

Skipping 26,339 KB..
06:03:02.572 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:03:03.093 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:03:03.115 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:03:03.512 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:03:03.531 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
[info] - SPARK-25993 CREATE EXTERNAL TABLE with subdirectories (12 seconds, 88 milliseconds)
[info] - SPARK-31580: Read a file written before ORC-569 (156 milliseconds)
06:03:04.435 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:04.435 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:04.436 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:04.506 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:04.507 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:04.507 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveShowCreateTableSuite:
06:03:04.542 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:04.827 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with user specified schema (579 milliseconds)
06:03:05.385 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:05.546 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table CTAS (670 milliseconds)
06:03:06.200 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:06.589 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned data source table (1 second, 34 milliseconds)
06:03:07.189 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:07.344 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - bucketed data source table (788 milliseconds)
06:03:08.020 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:08.319 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned bucketed data source table (926 milliseconds)
06:03:08.828 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:08.966 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with a comment (703 milliseconds)
06:03:09.508 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:09.648 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with table properties (632 milliseconds)
[info] - data source table using Dataset API (1 second, 657 milliseconds)
[info] - temp view (32 milliseconds)
06:03:11.570 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:03:11.894 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-24911: keep quotes for nested fields (606 milliseconds)
[info] - view (482 milliseconds)
[info] - view with output columns (494 milliseconds)
[info] - view with table comment and properties (474 milliseconds)
06:03:13.622 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:13.884 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - simple hive table (497 milliseconds)
[info] - simple external hive table (266 milliseconds)
06:03:14.393 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:14.747 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - partitioned hive table (598 milliseconds)
06:03:14.986 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:15.288 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with explicit storage info (565 milliseconds)
06:03:15.551 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:15.868 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause (624 milliseconds)
06:03:16.176 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:16.452 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with serde info (511 milliseconds)
06:03:16.686 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:16.984 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive bucketing is supported (566 milliseconds)
06:03:17.255 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive partitioned view is not supported (468 milliseconds)
06:03:17.726 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
06:03:18.023 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - SPARK-24911: keep quotes for nested fields in hive (527 milliseconds)
06:03:18.249 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - simple hive table in Spark DDL (540 milliseconds)
[info] - show create table as serde can't work on data source table (257 milliseconds)
[info] - simple external hive table in Spark DDL (318 milliseconds)
06:03:19.369 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause in Spark DDL (524 milliseconds)
06:03:19.891 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with nested fields with STORED AS clause in Spark DDL (525 milliseconds)
06:03:20.416 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with unsupported fileformat in Spark DDL (217 milliseconds)
06:03:20.632 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - hive table with serde info in Spark DDL (484 milliseconds)
06:03:21.117 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - partitioned, bucketed hive table in Spark DDL (498 milliseconds)
06:03:21.616 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 specified for non-external table:t1
[info] - show create table for transactional hive table (263 milliseconds)
06:03:21.955 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:21.955 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:21.955 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:22.045 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:22.045 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:22.045 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveResolutionSuite:
[info] - SPARK-3698: case insensitive test for nested data (58 milliseconds)
[info] - SPARK-5278: check ambiguous reference to fields (59 milliseconds)
06:03:22.255 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:22.255 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:22.255 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:22.327 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:22.327 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:22.327 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:22.344 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - table.attr (703 milliseconds)
06:03:22.939 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:22.939 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:22.982 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:22.982 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:22.982 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:23.053 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:23.053 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:23.054 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:23.070 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - database.table (734 milliseconds)
06:03:23.675 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:23.675 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:23.774 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:23.774 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:23.774 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:23.845 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:23.845 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:23.845 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:23.862 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - database.table table.attr (783 milliseconds)
06:03:24.462 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:24.462 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:24.507 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:24.507 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:24.507 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:24.584 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:24.584 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:24.584 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:24.602 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - database.table table.attr case insensitive (728 milliseconds)
06:03:25.196 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:25.196 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:25.243 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:25.243 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:25.243 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:25.318 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:25.319 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:25.319 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:25.336 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - alias.attr (758 milliseconds)
06:03:25.950 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:25.950 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:25.994 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:25.994 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:25.995 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:26.070 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:26.070 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:26.070 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:26.088 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - subquery-alias.attr (689 milliseconds)
06:03:26.624 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:26.624 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:26.668 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:26.668 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:26.668 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:26.753 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:26.753 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:26.753 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:26.780 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - quoted alias.attr (766 milliseconds)
06:03:27.404 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:27.405 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:27.450 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:27.450 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:27.450 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:27.526 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:27.526 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:27.526 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:27.544 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - attr (639 milliseconds)
06:03:28.051 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:28.051 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:28.097 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:28.097 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:28.098 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:28.178 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:28.178 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:28.178 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:28.197 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - alias.star (714 milliseconds)
[info] - case insensitivity with scala reflection (136 milliseconds)
[info] - case insensitivity with scala reflection joins !!! IGNORED !!!
[info] - nested repeated resolution (100 milliseconds)
06:03:29.000 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:29.000 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:29.043 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:29.043 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:29.043 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:29.114 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:29.114 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:29.115 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:29.132 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src specified for non-external table:src
[info] - test ambiguousReferences resolved as hive (2 seconds, 38 milliseconds)
06:03:31.029 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src does not exist; Force to delete it.
06:03:31.029 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/src
06:03:31.050 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1 does not exist; Force to delete it.
06:03:31.051 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t1
06:03:31.072 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t2 does not exist; Force to delete it.
06:03:31.072 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/t2
06:03:31.115 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:31.115 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:31.115 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:03:31.186 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:03:31.186 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:03:31.186 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveOrcQuerySuite:
[info] - Read/write All Types (653 milliseconds)
[info] - Read/write binary data (429 milliseconds)
06:03:32.580 WARN org.apache.spark.scheduler.TaskSetManager: Stage 15363 contains a task of very large size (1267 KiB). The maximum recommended task size is 1000 KiB.
[info] - Read/write all types with non-primitive type (2 seconds, 515 milliseconds)
[info] - Read/write UserDefinedType (626 milliseconds)
[info] - Creating case class RDD table (209 milliseconds)
[info] - Simple selection form ORC table (1 second, 685 milliseconds)
[info] - save and load case class RDD with `None`s as orc (493 milliseconds)
[info] - SPARK-16610: Respect orc.compress (i.e., OrcConf.COMPRESS) when compression is unset (671 milliseconds)
[info] - Compression options for writing to an ORC file (SNAPPY, ZLIB and NONE) (985 milliseconds)
[info] - simple select queries (1 second, 50 milliseconds)
[info] - appending (882 milliseconds)
[info] - overwriting (1 second, 235 milliseconds)
[info] - self-join (1 second, 64 milliseconds)
[info] - nested data - struct with array field (849 milliseconds)
[info] - nested data - array of struct (777 milliseconds)
[info] - columns only referenced by pushed down filters should remain (725 milliseconds)
[info] - SPARK-5309 strings stored using dictionary compression in orc (2 seconds, 746 milliseconds)
[info] - SPARK-9170: Don't implicitly lowercase of user-provided columns (1 second, 10 milliseconds)
[info] - SPARK-10623 Enable ORC PPD (6 seconds, 897 milliseconds)
[info] - SPARK-14962 Produce correct results on array type with isnotnull (528 milliseconds)
[info] - SPARK-15198 Support for pushing down filters for boolean types (443 milliseconds)
[info] - Support for pushing down filters for decimal types (1 second, 508 milliseconds)
[info] - Support for pushing down filters for timestamp types (1 second, 507 milliseconds)
[info] - column nullability and comment - write and then read (576 milliseconds)
[info] - Empty schema does not read data from ORC file (302 milliseconds)
[info] - read from multiple orc input paths (631 milliseconds)
06:04:03.220 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-5a8ac66e-4f13-4808-8da6-1cb9d87a9a67/third/part-00000-f561e694-6318-4577-bf5d-3f16c2588713-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-5a8ac66e-4f13-4808-8da6-1cb9d87a9a67/third/part-00000-f561e694-6318-4577-bf5d-3f16c2588713-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:03.321 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-5a8ac66e-4f13-4808-8da6-1cb9d87a9a67/third/part-00000-f561e694-6318-4577-bf5d-3f16c2588713-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-5a8ac66e-4f13-4808-8da6-1cb9d87a9a67/third/part-00000-f561e694-6318-4577-bf5d-3f16c2588713-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:04.274 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-d59e6041-0016-4341-8a5d-a28c209a41ba/third/part-00000-db26b21b-64b4-4fec-8659-73792739057b-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-d59e6041-0016-4341-8a5d-a28c209a41ba/third/part-00000-db26b21b-64b4-4fec-8659-73792739057b-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:04.357 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-d59e6041-0016-4341-8a5d-a28c209a41ba/third/part-00000-db26b21b-64b4-4fec-8659-73792739057b-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-d59e6041-0016-4341-8a5d-a28c209a41ba/third/part-00000-db26b21b-64b4-4fec-8659-73792739057b-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:05.008 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-9926474c-88a9-4d5b-9bbc-aedc8ddbf8c7/first/part-00000-4e92cbc9-9dce-49fa-99b5-1f4675783ed6-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-9926474c-88a9-4d5b-9bbc-aedc8ddbf8c7/first/part-00000-4e92cbc9-9dce-49fa-99b5-1f4675783ed6-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.inferSchema(OrcFileFormat.scala:81)
	at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:193)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:190)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:401)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:279)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:268)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:268)
	at org.apache.spark.sql.DataFrameReader.orc(DataFrameReader.scala:770)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153(OrcQuerySuite.scala:571)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153$adapted(OrcQuerySuite.scala:565)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:78)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:77)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:163)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(OrcTest.scala:51)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:77)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:76)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withTempDir(OrcTest.scala:51)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.testAllCorruptFiles$1(OrcQuerySuite.scala:565)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$156(OrcQuerySuite.scala:592)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:807)
	at org.scalatest.Assertions.intercept$(Assertions.scala:804)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1560)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$155(OrcQuerySuite.scala:591)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(OrcTest.scala:51)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withSQLConf(OrcTest.scala:51)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$148(OrcQuerySuite.scala:588)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:05.022 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-9926474c-88a9-4d5b-9bbc-aedc8ddbf8c7/second/part-00000-00559608-7b71-4fc4-9c94-7e8b69a1bdea-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-9926474c-88a9-4d5b-9bbc-aedc8ddbf8c7/second/part-00000-00559608-7b71-4fc4-9c94-7e8b69a1bdea-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.inferSchema(OrcFileFormat.scala:81)
	at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:193)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:190)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:401)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:279)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:268)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:268)
	at org.apache.spark.sql.DataFrameReader.orc(DataFrameReader.scala:770)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153(OrcQuerySuite.scala:571)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153$adapted(OrcQuerySuite.scala:565)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:78)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:77)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:163)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(OrcTest.scala:51)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:77)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:76)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withTempDir(OrcTest.scala:51)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.testAllCorruptFiles$1(OrcQuerySuite.scala:565)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$156(OrcQuerySuite.scala:592)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:807)
	at org.scalatest.Assertions.intercept$(Assertions.scala:804)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1560)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$155(OrcQuerySuite.scala:591)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(OrcTest.scala:51)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withSQLConf(OrcTest.scala:51)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$148(OrcQuerySuite.scala:588)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:05.612 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-70e75f4a-4236-4cfc-8dd9-18ff17f0e073/first/part-00000-e103fd83-280d-4e95-9382-7b965cec7c50-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-70e75f4a-4236-4cfc-8dd9-18ff17f0e073/first/part-00000-e103fd83-280d-4e95-9382-7b965cec7c50-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:61)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:05.626 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-70e75f4a-4236-4cfc-8dd9-18ff17f0e073/second/part-00000-b36bdfc0-3bff-49ea-9b2a-8c0eb7c48227-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-70e75f4a-4236-4cfc-8dd9-18ff17f0e073/second/part-00000-b36bdfc0-3bff-49ea-9b2a-8c0eb7c48227-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:61)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
06:04:06.595 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 15551.0 (TID 28499)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-be2b90ad-4cc1-4295-8bc4-b2cad59a49a3/third/part-00000-9d8266e5-0136-4e53-8f31-1ce0104907c1-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-be2b90ad-4cc1-4295-8bc4-b2cad59a49a3/third/part-00000-9d8266e5-0136-4e53-8f31-1ce0104907c1-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more
06:04:06.605 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 15551.0 (TID 28499, amp-jenkins-worker-04.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-be2b90ad-4cc1-4295-8bc4-b2cad59a49a3/third/part-00000-9d8266e5-0136-4e53-8f31-1ce0104907c1-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-be2b90ad-4cc1-4295-8bc4-b2cad59a49a3/third/part-00000-9d8266e5-0136-4e53-8f31-1ce0104907c1-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more

06:04:06.605 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 15551.0 failed 1 times; aborting job
06:04:07.581 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 15555.0 (TID 28503)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-e78eaeb4-eedf-4f60-b242-0a3c62dd2b5e/third/part-00000-c49c5147-e5fb-4afb-af86-73815e521603-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-e78eaeb4-eedf-4f60-b242-0a3c62dd2b5e/third/part-00000-c49c5147-e5fb-4afb-af86-73815e521603-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more
06:04:07.584 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 15555.0 (TID 28503, amp-jenkins-worker-04.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-e78eaeb4-eedf-4f60-b242-0a3c62dd2b5e/third/part-00000-c49c5147-e5fb-4afb-af86-73815e521603-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2133)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-e78eaeb4-eedf-4f60-b242-0a3c62dd2b5e/third/part-00000-c49c5147-e5fb-4afb-af86-73815e521603-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more

06:04:07.584 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 15555.0 failed 1 times; aborting job
06:04:08.815 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 15560.0 (TID 28508)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-f22d2353-539f-4d16-88b1-25bd4df7f416/first/part-00000-e85c05ce-8046-448e-ac31-5a819cee8a4c-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:61)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-f22d2353-539f-4d16-88b1-25bd4df7f416/first/part-00000-e85c05ce-8046-448e-ac31-5a819cee8a4c-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 34 more
06:04:08.818 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 15560.0 (TID 28508, amp-jenkins-worker-04.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-f22d2353-539f-4d16-88b1-25bd4df7f416/first/part-00000-e85c05ce-8046-448e-ac31-5a819cee8a4c-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:61)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/spark-f22d2353-539f-4d16-88b1-25bd4df7f416/first/part-00000-e85c05ce-8046-448e-ac31-5a819cee8a4c-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 34 more

06:04:08.819 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 15560.0 failed 1 times; aborting job
[info] - Enabling/disabling ignoreCorruptFiles (6 seconds, 630 milliseconds)
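(Editor's note: the "Malformed ORC file ... Invalid postscript" traces above appear to be the expected output of this test, which points the ORC reader at JSON part-files; spark.sql.files.ignoreCorruptFiles then decides whether such files are skipped with a WARN or fail the task. A minimal Scala sketch of that behavior, with hypothetical /tmp paths and assuming an existing SparkSession named spark, not taken from the suite itself:

    // Write one JSON and one ORC directory; the JSON files are "corrupt" from the ORC reader's view.
    spark.range(10).toDF("i").write.json("/tmp/mixed/json")
    spark.range(10).toDF("i").write.orc("/tmp/mixed/orc")

    // With the flag enabled, corrupt files are skipped and only logged as warnings.
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
    spark.read.orc("/tmp/mixed/json", "/tmp/mixed/orc").count()

    // With the flag disabled, the read fails with "Malformed ORC file ... Invalid postscript".
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "false")
    spark.read.orc("/tmp/mixed/json", "/tmp/mixed/orc").count()
)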
[info] - SPARK-27160 Predicate pushdown correctness on DecimalType for ORC (1 second, 193 milliseconds)
[info] - SPARK-8501: Avoids discovery schema from empty ORC files (1 second, 47 milliseconds)
06:04:12.707 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:04:12.789 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
[info] - Verify the ORC conversion parameter: CONVERT_METASTORE_ORC (1 second, 859 milliseconds)
[info] - converted ORC table supports resolving mixed case field (1 second, 646 milliseconds)
[info] - SPARK-20728 Make ORCFileFormat configurable between sql/hive and sql/core (514 milliseconds)
[info] - SPARK-22267 Spark SQL incorrectly reads ORC files when column order is different (1 second, 425 milliseconds)
[info] - SPARK-19809 NullPointerException on zero-size ORC file (623 milliseconds)
[info] - SPARK-23340 Empty float/double array columns raise EOFException !!! IGNORED !!!
06:04:17.188 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/spark_26437 specified for non-external table:spark_26437
06:04:17.908 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
06:04:18.012 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
[info] - SPARK-26437 Can not query decimal type when value is 0 (988 milliseconds)
06:04:18.187 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/dummy_orc_partitioned specified for non-external table:dummy_orc_partitioned
06:04:19.323 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@3/target/tmp/warehouse-f97f345b-ee1a-47dc-a2d4-0ffcfebcd684/dummy_orc_partitioned specified for non-external table:dummy_orc_partitioned
[info] - SPARK-28573 ORC conversation could be applied for partitioned table insertion (2 seconds, 587 milliseconds)
06:04:20.844 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:04:20.844 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:04:20.844 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:04:20.943 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:04:20.943 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:04:20.943 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
06:04:21.547 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.857s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 2.188s
[info] ScalaTest
[info] Run completed in 2 hours, 47 minutes, 36 seconds.
[info] Total number of tests run: 3637
[info] Suites: completed 131, aborted 0
[info] Tests: succeeded 3637, failed 0, canceled 0, ignored 598, pending 0
[info] All tests passed.
[info] Passed: Total 3640, Failed 0, Errors 0, Passed 3640, Ignored 598
[error] (streaming-kafka-0-10/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10143 s, completed May 15, 2020 6:05:00 AM
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder@3/build/sbt -Phadoop-2.7 -Phive-2.3 -Pkubernetes -Pyarn -Phive-thriftserver -Pspark-ganglia-lgpl -Phadoop-cloud -Phive -Pmesos -Pkinesis-asl -Dtest.exclude.tags=org.apache.spark.tags.ExtendedYarnTest test ; received return code 1
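(Editor's note: per the summary lines above, the 3637 hive-module tests all passed; the build fails on the streaming-kafka-0-10 test task named in the [error] lines. A hedged way to reproduce just that module locally, assuming the same checkout and the same profile flags as the command above; this exact invocation is not taken from the log:

    build/sbt -Phadoop-2.7 -Phive-2.3 -Pkubernetes -Pyarn -Phive-thriftserver \
      -Pspark-ganglia-lgpl -Phadoop-cloud -Phive -Pmesos -Pkinesis-asl \
      "streaming-kafka-0-10/test"
)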
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE