Console Output

Skipping 25,151 KB of earlier console output.
21:20:55.396 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:20:55.396 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:20:55.470 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:20:55.470 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:20:55.470 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveHadoopDelegationTokenManagerSuite:
[info] - default configuration (31 milliseconds)
21:20:55.542 WARN org.apache.spark.deploy.security.HadoopDelegationTokenManager: spark.yarn.security.credentials.hive.enabled is deprecated.  Please use spark.security.credentials.hive.enabled instead.
[info] - using deprecated configurations (2 milliseconds)
[info] - SPARK-23209: obtain tokens when Hive classes are not available (562 milliseconds)
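Note on the deprecation warning at 21:20:55.542: both config names appear verbatim in the message, and the fix is a one-line key swap. A minimal sketch of opting into the replacement key (the SparkConf wiring is illustrative):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // replacement key named in the warning; the YARN-specific
      // spark.yarn.security.credentials.hive.enabled key still works
      // but logs the WARN above
      .set("spark.security.credentials.hive.enabled", "true")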
[info] JsonHadoopFsRelationSuite:
[info] - test all data types (21 seconds, 355 milliseconds)
[info] - save()/load() - non-partitioned table - Overwrite (944 milliseconds)
[info] - save()/load() - non-partitioned table - Append (1 second, 240 milliseconds)
[info] - save()/load() - non-partitioned table - ErrorIfExists (56 milliseconds)
[info] - save()/load() - non-partitioned table - Ignore (65 milliseconds)
[info] - save()/load() - partitioned table - simple queries (2 seconds, 190 milliseconds)
[info] - save()/load() - partitioned table - Overwrite (1 second, 532 milliseconds)
[info] - save()/load() - partitioned table - Append (1 second, 552 milliseconds)
[info] - save()/load() - partitioned table - Append - new partition values (935 milliseconds)
[info] - save()/load() - partitioned table - ErrorIfExists (70 milliseconds)
[info] - save()/load() - partitioned table - Ignore (69 milliseconds)
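The save()/load() round-trips above map onto the DataFrameWriter/DataFrameReader API; a minimal sketch under assumed names (df, the path, and the partition column p are illustrative):

    // write a partitioned JSON table in one of the modes the tests cycle through
    df.write
      .format("json")
      .partitionBy("p")
      .mode("overwrite")             // the suite also runs Append / ErrorIfExists / Ignore
      .save("/tmp/json_partitioned")

    // read it back, recovering the partition column from the directory layout
    val loaded = spark.read.format("json").load("/tmp/json_partitioned")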
21:21:26.430 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - non-partitioned table - Overwrite (483 milliseconds)
21:21:26.879 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - non-partitioned table - Append (1 second, 14 milliseconds)
21:21:27.683 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - non-partitioned table - ErrorIfExists (238 milliseconds)
21:21:27.908 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - non-partitioned table - Ignore (324 milliseconds)
21:21:28.574 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - partitioned table - simple queries (1 second, 648 milliseconds)
[info] - saveAsTable()/load() - partitioned table - boolean type (1 second, 653 milliseconds)
21:21:32.093 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
21:21:33.029 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - partitioned table - Overwrite (2 seconds, 253 milliseconds)
21:21:34.262 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - partitioned table - Append (1 second, 825 milliseconds)
21:21:35.972 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - partitioned table - Append - new partition values (1 second, 692 milliseconds)
21:21:37.650 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - saveAsTable()/load() - partitioned table - Append - mismatched partition columns (568 milliseconds)
[info] - saveAsTable()/load() - partitioned table - ErrorIfExists (29 milliseconds)
[info] - saveAsTable()/load() - partitioned table - Ignore (39 milliseconds)
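The repeated TestHiveExternalCatalog warning above fires whenever a data source table whose provider has no matching Hive SerDe is persisted through saveAsTable(); a minimal sketch of the triggering call (session setup, sample data, and the table name `default`.`t` are illustrative):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("serde-warning-sketch")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    Seq((1, "a"), (2, "b")).toDF("id", "value")
      .write
      .format("json")           // no Hive SerDe corresponds to the json provider
      .saveAsTable("default.t") // so the table is stored in Spark SQL specific format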
[info] - load() - with directory of unpartitioned data in nested subdirs (696 milliseconds)
[info] - Hadoop style globbing - unpartitioned data (1 second, 558 milliseconds)
[info] - Hadoop style globbing - partitioned data with schema inference (2 seconds, 432 milliseconds)
21:21:43.283 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-9735 Partition column type casting (1 second, 647 milliseconds)
21:21:44.881 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-7616: adjust column name order accordingly when saving partitioned table (1 second, 225 milliseconds)
[info] - SPARK-8887: Explicitly define which data types can be used as dynamic partition columns (155 milliseconds)
[info] - Locality support for FileScanRDD (480 milliseconds)
[info] - SPARK-16975: Partitioned table with the column having '_' should be read correctly (1 second, 377 milliseconds)
[info] - save()/load() - partitioned table - simple queries - partition columns in data (1 second, 672 milliseconds)
[info] - SPARK-9894: save complex types to JSON (564 milliseconds)
[info] - SPARK-10196: save decimal type to JSON (508 milliseconds)
21:21:50.374 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:21:50.374 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:21:50.374 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:21:50.474 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:21:50.475 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:21:50.475 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveOrcQuerySuite:
[info] - Read/write All Types (825 milliseconds)
[info] - Read/write binary data (387 milliseconds)
21:21:51.992 WARN org.apache.spark.scheduler.TaskSetManager: Stage 12459 contains a task of very large size (1267 KiB). The maximum recommended task size is 1000 KiB.
[info] - Read/write all types with non-primitive type (2 seconds, 707 milliseconds)
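The TaskSetManager warning above (1267 KiB against the recommended 1000 KiB) usually means serialized tasks are carrying data; a hedged sketch of the common cause and the usual alternative (collection contents are illustrative):

    // parallelizing a large local collection embeds chunks of it in every task,
    // pushing serialized task size past the recommended limit
    val bigLocal = Seq.fill(100000)("a fairly long synthetic row")
    val rdd = spark.sparkContext.parallelize(bigLocal)

    // read-only data shared by many tasks is better shipped once per executor
    val shared = spark.sparkContext.broadcast(bigLocal)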
[info] - Read/write UserDefinedType (528 milliseconds)
[info] - Creating case class RDD table (189 milliseconds)
[info] - Simple selection form ORC table (2 seconds, 75 milliseconds)
[info] - save and load case class RDD with `None`s as orc (447 milliseconds)
[info] - SPARK-16610: Respect orc.compress (i.e., OrcConf.COMPRESS) when compression is unset (639 milliseconds)
[info] - Compression options for writing to an ORC file (SNAPPY, ZLIB and NONE) (917 milliseconds)
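The two compression tests above distinguish Spark's own writer option from the native ORC key; a minimal sketch of both spellings (df and the output paths are illustrative; the precedence rule is the one SPARK-16610's title states):

    // Spark-level option; takes precedence when both are set
    df.write.option("compression", "snappy").orc("/tmp/orc_snappy")

    // native OrcConf.COMPRESS key; respected when "compression" is unset
    df.write.option("orc.compress", "ZLIB").orc("/tmp/orc_zlib")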
[info] - simple select queries (1 second, 77 milliseconds)
[info] - appending (882 milliseconds)
[info] - overwriting (1 second, 85 milliseconds)
[info] - self-join (1 second, 223 milliseconds)
[info] - nested data - struct with array field (701 milliseconds)
[info] - nested data - array of struct (700 milliseconds)
[info] - columns only referenced by pushed down filters should remain (657 milliseconds)
[info] - SPARK-5309 strings stored using dictionary compression in orc (2 seconds, 346 milliseconds)
[info] - SPARK-9170: Don't implicitly lowercase of user-provided columns (933 milliseconds)
[info] - SPARK-10623 Enable ORC PPD (6 seconds, 864 milliseconds)
[info] - SPARK-14962 Produce correct results on array type with isnotnull (576 milliseconds)
[info] - SPARK-15198 Support for pushing down filters for boolean types (455 milliseconds)
[info] - Support for pushing down filters for decimal types (1 second, 529 milliseconds)
[info] - Support for pushing down filters for timestamp types (1 second, 597 milliseconds)
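The pushdown tests above (SPARK-10623, SPARK-15198, and the decimal/timestamp cases) exercise ORC predicate pushdown; a minimal sketch of enabling it (the conf key is Spark's spark.sql.orc.filterPushdown; path and predicate are illustrative):

    spark.conf.set("spark.sql.orc.filterPushdown", "true")
    // with PPD on, the predicate can be evaluated inside the ORC reader,
    // letting it skip stripes rather than filtering rows afterwards
    spark.read.orc("/tmp/orc_data").filter("id > 5").count()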
[info] - column nullability and comment - write and then read (675 milliseconds)
[info] - Empty schema does not read data from ORC file (312 milliseconds)
[info] - read from multiple orc input paths (637 milliseconds)
21:22:22.424 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-deeb95fc-8670-4685-ad26-338860116d69/third/part-00000-a3c42724-af5f-4c21-a1fb-949829fdd636-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-deeb95fc-8670-4685-ad26-338860116d69/third/part-00000-a3c42724-af5f-4c21-a1fb-949829fdd636-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1229)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1229)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2144)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
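The FileFormatException above (and the near-identical traces that follow) is expected, not a failure: each trace runs through OrcQueryTest.testAllCorruptFiles (OrcQuerySuite.scala:564), which writes JSON part files and then points the ORC reader at the same directory, so the footer check finds no ORC postscript. A minimal sketch of that setup (paths illustrative):

    // write JSON part files, then read the directory back as ORC; each footer
    // is rejected with "Malformed ORC file ... Invalid postscript."
    spark.range(10).write.json("/tmp/corrupt/third")
    spark.read.orc("/tmp/corrupt/third").count()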
21:22:22.505 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-deeb95fc-8670-4685-ad26-338860116d69/third/part-00000-a3c42724-af5f-4c21-a1fb-949829fdd636-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-deeb95fc-8670-4685-ad26-338860116d69/third/part-00000-a3c42724-af5f-4c21-a1fb-949829fdd636-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:874)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:874)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:23.416 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4970f64a-4ac5-45f3-bf58-7d63c86aa41e/third/part-00000-29fcad82-663f-4737-b761-cb25d2bccb82-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4970f64a-4ac5-45f3-bf58-7d63c86aa41e/third/part-00000-29fcad82-663f-4737-b761-cb25d2bccb82-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1229)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1229)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2144)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:23.486 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4970f64a-4ac5-45f3-bf58-7d63c86aa41e/third/part-00000-29fcad82-663f-4737-b761-cb25d2bccb82-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4970f64a-4ac5-45f3-bf58-7d63c86aa41e/third/part-00000-29fcad82-663f-4737-b761-cb25d2bccb82-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:874)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:874)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:24.050 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-467d8c26-75c6-47d8-8146-c4d31ba92c9c/first/part-00000-19ac1afe-4cce-44dd-810d-81e6af405381-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-467d8c26-75c6-47d8-8146-c4d31ba92c9c/first/part-00000-19ac1afe-4cce-44dd-810d-81e6af405381-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.inferSchema(OrcFileFormat.scala:81)
	at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:194)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:191)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:402)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:246)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:235)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:235)
	at org.apache.spark.sql.DataFrameReader.orc(DataFrameReader.scala:729)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153(OrcQuerySuite.scala:570)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153$adapted(OrcQuerySuite.scala:564)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:163)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(OrcTest.scala:50)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withTempDir(OrcTest.scala:50)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.testAllCorruptFiles$1(OrcQuerySuite.scala:564)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$156(OrcQuerySuite.scala:591)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:807)
	at org.scalatest.Assertions.intercept$(Assertions.scala:804)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1560)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$155(OrcQuerySuite.scala:590)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(OrcTest.scala:50)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withSQLConf(OrcTest.scala:50)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$148(OrcQuerySuite.scala:587)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:24.064 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-467d8c26-75c6-47d8-8146-c4d31ba92c9c/second/part-00000-c8f478c7-d1ba-419d-820e-16bc99ad7618-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-467d8c26-75c6-47d8-8146-c4d31ba92c9c/second/part-00000-c8f478c7-d1ba-419d-820e-16bc99ad7618-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.inferSchema(OrcFileFormat.scala:81)
	at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$11(DataSource.scala:194)
	at scala.Option.orElse(Option.scala:447)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:191)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:402)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:246)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:235)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:235)
	at org.apache.spark.sql.DataFrameReader.orc(DataFrameReader.scala:729)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153(OrcQuerySuite.scala:570)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$153$adapted(OrcQuerySuite.scala:564)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1(SQLTestUtils.scala:76)
	at org.apache.spark.sql.test.SQLTestUtils.$anonfun$withTempDir$1$adapted(SQLTestUtils.scala:75)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:163)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtils$$super$withTempDir(OrcTest.scala:50)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir(SQLTestUtils.scala:75)
	at org.apache.spark.sql.test.SQLTestUtils.withTempDir$(SQLTestUtils.scala:74)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withTempDir(OrcTest.scala:50)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.testAllCorruptFiles$1(OrcQuerySuite.scala:564)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$156(OrcQuerySuite.scala:591)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:807)
	at org.scalatest.Assertions.intercept$(Assertions.scala:804)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1560)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$155(OrcQuerySuite.scala:590)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:52)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:36)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(OrcTest.scala:50)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:231)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:229)
	at org.apache.spark.sql.execution.datasources.orc.OrcTest.withSQLConf(OrcTest.scala:50)
	at org.apache.spark.sql.execution.datasources.orc.OrcQueryTest.$anonfun$new$148(OrcQuerySuite.scala:587)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:24.681 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-0c7a3348-c9ea-41b0-95ca-6dc473e02a46/first/part-00000-1c878272-51b2-4610-886b-b5d92ea04c6f-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-0c7a3348-c9ea-41b0-95ca-6dc473e02a46/first/part-00000-1c878272-51b2-4610-886b-b5d92ea04c6f-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:63)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:24.695 WARN org.apache.spark.sql.hive.orc.OrcFileOperator: Skipped the footer in the corrupted file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-0c7a3348-c9ea-41b0-95ca-6dc473e02a46/second/part-00000-420b2366-5399-4b59-a6ad-cdb7d7a5c0ed-c000.json
org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-0c7a3348-c9ea-41b0-95ca-6dc473e02a46/second/part-00000-420b2366-5399-4b59-a6ad-cdb7d7a5c0ed-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter$lzycompute(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.internalIter(FileScanRDD.scala:141)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1$$anon$2.getNext(FileScanRDD.scala:145)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:173)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:63)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21:22:25.689 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 12647.0 (TID 23478)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-d2ec5817-23a2-4d1b-af96-60f781d87998/third/part-00000-7b184a42-251e-42a3-9e9e-796de0d44224-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1229)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1229)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2144)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-d2ec5817-23a2-4d1b-af96-60f781d87998/third/part-00000-7b184a42-251e-42a3-9e9e-796de0d44224-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more
21:22:25.693 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 12647.0 (TID 23478, amp-jenkins-worker-06.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-d2ec5817-23a2-4d1b-af96-60f781d87998/third/part-00000-7b184a42-251e-42a3-9e9e-796de0d44224-c000.json

21:22:25.693 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 12647.0 failed 1 times; aborting job
21:22:26.609 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 12651.0 (TID 23482)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-1fea2310-fc34-453c-9de8-993d62357f7f/third/part-00000-99708f66-1727-4742-a45c-0431b5b0bf96-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1229)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1229)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2144)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-1fea2310-fc34-453c-9de8-993d62357f7f/third/part-00000-99708f66-1727-4742-a45c-0431b5b0bf96-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 32 more
21:22:26.613 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 12651.0 (TID 23482, amp-jenkins-worker-06.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-1fea2310-fc34-453c-9de8-993d62357f7f/third/part-00000-99708f66-1727-4742-a45c-0431b5b0bf96-c000.json

21:22:26.613 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 12651.0 failed 1 times; aborting job
21:22:27.771 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 12656.0 (TID 23487)
org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-17773775-2694-4217-9589-fb5f6b523791/first/part-00000-b96e9153-ac69-43f6-b7de-eeab67637a7c-c000.json
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:83)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:87)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$readSchema$1(OrcFileOperator.scala:96)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at scala.collection.TraversableOnce.collectFirst(TraversableOnce.scala:148)
	at scala.collection.TraversableOnce.collectFirst$(TraversableOnce.scala:135)
	at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1429)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.readSchema(OrcFileOperator.scala:96)
	at org.apache.spark.sql.hive.orc.OrcFileFormat.$anonfun$buildReader$2(OrcFileFormat.scala:163)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:147)
	at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(generated.java:33)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:63)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:460)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:463)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.orc.FileFormatException: Malformed ORC file file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-17773775-2694-4217-9589-fb5f6b523791/first/part-00000-b96e9153-ac69-43f6-b7de-eeab67637a7c-c000.json. Invalid postscript.
	at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:275)
	at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:582)
	at org.apache.orc.impl.ReaderImpl.<init>(ReaderImpl.java:370)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:63)
	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:55)
	at org.apache.spark.sql.hive.orc.OrcFileOperator$.$anonfun$getFileReader$3(OrcFileOperator.scala:76)
	... 34 more
21:22:27.786 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 12656.0 (TID 23487, amp-jenkins-worker-06.amp, executor driver): org.apache.spark.SparkException: Could not read footer for file: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-17773775-2694-4217-9589-fb5f6b523791/first/part-00000-b96e9153-ac69-43f6-b7de-eeab67637a7c-c000.json

21:22:27.787 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 12656.0 failed 1 times; aborting job
[info] - Enabling/disabling ignoreCorruptFiles (6 seconds, 335 milliseconds)
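A hedged Scala sketch (not taken from this build) of the flag the test above exercises: the stack traces preceding it come from deliberately pointing the ORC reader at JSON files, and spark.sql.files.ignoreCorruptFiles decides whether such files abort the job or are skipped. Paths and names here are illustrative.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("ignoreCorruptFiles-sketch")
      .getOrCreate()

    // Default behavior: a malformed file fails the scan with the
    // "Could not read footer" / "Invalid postscript" errors logged above.
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "false")

    // With the flag on, corrupt files are silently skipped and the count
    // reflects only the readable ORC files in the directory.
    spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
    val rows = spark.read.orc("/tmp/mixed-orc-and-json")  // hypothetical path
    println(rows.count())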
[info] - SPARK-27160 Predicate pushdown correctness on DecimalType for ORC (1 second, 176 milliseconds)
[info] - SPARK-8501: Avoids discovery schema from empty ORC files (969 milliseconds)
21:22:31.654 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
21:22:31.751 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
[info] - Verify the ORC conversion parameter: CONVERT_METASTORE_ORC (1 second, 973 milliseconds)
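The CONVERT_METASTORE_ORC parameter verified above corresponds to the spark.sql.hive.convertMetastoreOrc SQL conf. A minimal sketch of what it toggles, reusing the session from the sketch above and assuming Hive support is enabled; the table name is illustrative.

    // true (the default): Hive ORC tables are read through Spark's native
    // ORC data source; false: rows go through the Hive SerDe path instead.
    spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")
    spark.sql("CREATE TABLE demo_orc (id INT) STORED AS ORC")  // hypothetical table
    spark.sql("SELECT * FROM demo_orc").explain()  // converted reads show a FileScan node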
[info] - converted ORC table supports resolving mixed case field (1 second, 543 milliseconds)
[info] - SPARK-20728 Make ORCFileFormat configurable between sql/hive and sql/core (458 milliseconds)
[info] - SPARK-22267 Spark SQL incorrectly reads ORC files when column order is different (1 second, 365 milliseconds)
[info] - SPARK-19809 NullPointerException on zero-size ORC file (767 milliseconds)
[info] - SPARK-23340 Empty float/double array columns raise EOFException !!! IGNORED !!!
21:22:36.110 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/warehouse-9482319d-193f-4949-98a1-6bb129c7a93e/spark_26437 specified for non-external table:spark_26437
21:22:36.906 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
21:22:37.022 ERROR org.apache.hadoop.hive.ql.io.AcidUtils: Failed to get files with ID; using regular API: Only supported for DFS; got class org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem
[info] - SPARK-26437 Can not query decimal type when value is 0 (1 second, 87 milliseconds)
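SPARK-26437 was a regression in which a DECIMAL value of 0 stored in a Hive ORC table could not be queried back. A reproduction sketch in the spirit of that test; the spark_26437 name matches the warehouse path in the log above, while the literal and predicate are illustrative.

    spark.sql(
      "CREATE TABLE spark_26437 STORED AS ORC AS SELECT CAST('0.00' AS DECIMAL(5, 2)) AS d")
    // Before the fix this query found no rows even though the table holds one.
    spark.sql("SELECT * FROM spark_26437 WHERE d = 0").show()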
21:22:37.201 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/warehouse-9482319d-193f-4949-98a1-6bb129c7a93e/dummy_orc_partitioned specified for non-external table:dummy_orc_partitioned
21:22:38.281 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/warehouse-9482319d-193f-4949-98a1-6bb129c7a93e/dummy_orc_partitioned specified for non-external table:dummy_orc_partitioned
[info] - SPARK-28573 ORC conversation could be applied for partitioned table insertion (2 seconds, 748 milliseconds)
21:22:40.033 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:40.033 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:40.033 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:22:40.133 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:40.133 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:40.133 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveOrcPartitionDiscoverySuite:
[info] - read partitioned table - normal case (1 second, 932 milliseconds)
[info] - read partitioned table - with nulls (1 second, 641 milliseconds)
[info] - SPARK-27162: handle pathfilter configuration correctly (771 milliseconds)
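SPARK-27162 is about honoring a Hadoop PathFilter when listing input files for Hive ORC tables. A hedged sketch of the mechanism: the config key is the standard Hadoop one, while TmpFileFilter is a made-up example filter.

    import org.apache.hadoop.fs.{Path, PathFilter}

    class TmpFileFilter extends PathFilter {
      // Drop ".tmp" files from the input listing.
      override def accept(path: Path): Boolean = !path.getName.endsWith(".tmp")
    }

    spark.sparkContext.hadoopConfiguration.setClass(
      "mapreduce.input.pathFilter.class", classOf[TmpFileFilter], classOf[PathFilter])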
21:22:44.588 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:44.589 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:44.589 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:22:44.663 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:44.663 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:44.663 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] PruneFileSourcePartitionsSuite:
[info] - PruneFileSourcePartitions should not change the output of LogicalRelation (215 milliseconds)
[info] - SPARK-20986 Reset table's statistics after PruneFileSourcePartitions rule (1 second, 300 milliseconds)
[info] - SPARK-26576 Broadcast hint not applied to partitioned table (969 milliseconds)
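SPARK-26576 fixed the broadcast hint being ignored for partitioned tables. A sketch of the hint API that test exercises; table and column names are illustrative.

    import org.apache.spark.sql.functions.broadcast

    val facts = spark.table("facts")  // hypothetical partitioned table
    val dims = spark.table("dims")    // hypothetical small dimension table
    // Ask for a broadcast-hash join regardless of size estimates.
    facts.join(broadcast(dims), Seq("key")).explain()  // expect BroadcastHashJoin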
21:22:47.271 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:47.271 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:47.271 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:22:47.424 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:47.424 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:47.425 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveOrcFilterSuite:
[info] - filter pushdown - integer (690 milliseconds)
[info] - filter pushdown - long (689 milliseconds)
[info] - filter pushdown - float (696 milliseconds)
[info] - filter pushdown - double (761 milliseconds)
[info] - filter pushdown - string (659 milliseconds)
[info] - filter pushdown - boolean (703 milliseconds)
[info] - filter pushdown - decimal (729 milliseconds)
[info] - filter pushdown - timestamp (666 milliseconds)
[info] - filter pushdown - combinations with logical operators (469 milliseconds)
[info] - no filter pushdown - non-supported types (1 second, 81 milliseconds)
[info] - SPARK-12218 and SPARK-25699 Converting conjunctions into ORC SearchArguments (4 milliseconds)
[info] - SPARK-27699 Converting disjunctions into ORC SearchArguments (1 millisecond)
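The pushdown tests above check that Catalyst predicates are translated into ORC SearchArguments so stripes can be skipped at read time. A minimal sketch of turning pushdown on and inspecting it; the path and columns are illustrative.

    import org.apache.spark.sql.functions.col

    spark.conf.set("spark.sql.orc.filterPushdown", "true")
    val df = spark.read.orc("/tmp/orc-data")  // hypothetical path
    // Pushed predicates appear under PushedFilters in the scan node of the plan.
    df.filter(col("id") > 10 && col("name") === "a").explain()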
21:22:54.693 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:54.693 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:54.693 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:22:54.767 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:22:54.767 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:22:54.767 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] ParquetHiveCompatibilitySuite:
21:22:56.036 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-b1b9555d-58d2-4f54-a94f-0c7e5923b459;
org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-b1b9555d-58d2-4f54-a94f-0c7e5923b459;
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4(DataSource.scala:779)
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4$adapted(DataSource.scala:776)
	at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:374)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
[info] - simple primitives (1 second, 299 milliseconds)
21:22:57.159 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-cbdd30bc-0f2f-425c-b0ca-29f23d9d94cb;
[info] - SPARK-10177 timestamp (1 second, 114 milliseconds)
21:22:58.358 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-8ffaff09-7ce6-4f72-9b92-fb507643f09a;
[info] - array (1 second, 197 milliseconds)
21:22:59.571 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-59df2908-436c-4126-bcdd-9ec85b2e211a;
[info] - map (1 second, 210 milliseconds)
[info] - map entries with null keys !!! IGNORED !!!
21:23:00.746 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-983e1870-e7f5-4c7c-b8f9-3d36e2f79e8d;
[info] - struct (1 second, 177 milliseconds)
21:23:01.915 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-ce3f05e3-37a2-437b-b96d-52431f3f0cbb;
[info] - SPARK-16344: array of struct with a single field named 'array_element' (1 second, 169 milliseconds)
21:23:02.037 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:23:02.037 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:23:02.037 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
21:23:02.111 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
21:23:02.111 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
21:23:02.111 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 2.267s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
21:23:04.808 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.552s
21:23:05.279 WARN io.netty.util.concurrent.SingleThreadEventExecutor: An event executor terminated with non-empty task queue (2)
[info] ScalaTest
[info] Run completed in 1 hour, 44 minutes, 48 seconds.
[info] Total number of tests run: 3497
[info] Suites: completed 128, aborted 0
[info] Tests: succeeded 3497, failed 0, canceled 0, ignored 595, pending 0
[info] All tests passed.
[info] Passed: Total 3500, Failed 0, Errors 0, Passed 3500, Ignored 595
[error] (hive-thriftserver/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 6331 s, completed Mar 20, 2020 9:23:48 PM
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder/build/sbt -Phadoop-2.7 -Phive-2.3 -Phive-thriftserver -Pkinesis-asl -Phadoop-cloud -Pspark-ganglia-lgpl -Pyarn -Phive -Pkubernetes -Pmesos test ; received return code 1
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE