Console Output (build failed)

Skipping 13,185 KB of earlier output (see the full log).
[info] - SPARK-24013: unneeded compress can cause performance issues with sorted input (2 seconds, 396 milliseconds)
[info] StringFunctionsSuite:
[info] - string concat (220 milliseconds)
[info] - string concat_ws (176 milliseconds)
[info] - string elt (236 milliseconds)
[info] - string Levenshtein distance (178 milliseconds)
[info] - string regex_replace / regex_extract (205 milliseconds)
[info] - non-matching optional group (216 milliseconds)
[info] - string ascii function (181 milliseconds)
[info] - string base64/unbase64 function (209 milliseconds)
[info] - string overlay function (489 milliseconds)
[info] - binary overlay function (412 milliseconds)
[info] - string / binary substring function (253 milliseconds)
[info] - string encode/decode function (195 milliseconds)
[info] - string translate (179 milliseconds)
[info] - string trim functions (424 milliseconds)
[info] - string formatString function (210 milliseconds)
[info] - soundex function (165 milliseconds)
[info] - string instr function (182 milliseconds)
[info] - string substring_index function (193 milliseconds)
[info] - string locate function (185 milliseconds)
[info] - string padding functions (211 milliseconds)
[info] - string parse_url function (746 milliseconds)
[info] - udf/udf-group-analytics.sql - Regular Python UDF (25 seconds, 485 milliseconds)
[info] - string repeat function (231 milliseconds)
[info] - string reverse function (160 milliseconds)
[info] - string space function (83 milliseconds)
[info] - string split function with no limit (187 milliseconds)
[info] - string split function with limit explicitly set to 0 (175 milliseconds)
[info] - string split function with positive limit (202 milliseconds)
[info] - string split function with negative limit (170 milliseconds)
[info] - string / binary length function (267 milliseconds)
[info] - initcap function (186 milliseconds)
[info] - number format function (728 milliseconds)
[info] - string sentences function (260 milliseconds)
[info] - str_to_map function (213 milliseconds)
00:03:17.396 WARN org.apache.spark.sql.StringFunctionsSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.StringFunctionsSuite, thread names: block-manager-slave-async-thread-pool-89, block-manager-slave-async-thread-pool-92, block-manager-slave-async-thread-pool-6, block-manager-slave-async-thread-pool-21, block-manager-slave-async-thread-pool-10, block-manager-slave-async-thread-pool-42, block-manager-slave-async-thread-pool-69, block-manager-slave-async-thread-pool-84, block-manager-slave-async-thread-pool-73, block-manager-slave-async-thread-pool-20, block-manager-slave-async-thread-pool-40, block-manager-slave-async-thread-pool-70 =====

[info] BucketingUtilsSuite:
[info] - generate bucket id (1 millisecond)
[info] - match bucket ids (2 milliseconds)
[info] FileBasedDataSourceSuite:
[info] - Writing empty datasets should not fail - orc (147 milliseconds)
[info] - Writing empty datasets should not fail - parquet (128 milliseconds)
[info] - Writing empty datasets should not fail - csv (124 milliseconds)
[info] - Writing empty datasets should not fail - json (121 milliseconds)
[info] - Writing empty datasets should not fail - text (119 milliseconds)
[info] - SPARK-23072 Write and read back unicode column names - orc (280 milliseconds)
[info] - SPARK-23072 Write and read back unicode column names - parquet (319 milliseconds)
[info] - SPARK-23072 Write and read back unicode column names - csv (397 milliseconds)
[info] - SPARK-23072 Write and read back unicode column names - json (296 milliseconds)
[info] - SPARK-15474 Write and read back non-empty schema with empty dataframe - orc (270 milliseconds)
[info] - SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet (298 milliseconds)
[info] - SPARK-23271 empty RDD when saved should write a metadata only file - orc (263 milliseconds)
[info] - SPARK-23271 empty RDD when saved should write a metadata only file - parquet (322 milliseconds)
[info] - SPARK-23372 error while writing empty schema files using orc (25 milliseconds)
[info] - SPARK-23372 error while writing empty schema files using parquet (18 milliseconds)
[info] - SPARK-23372 error while writing empty schema files using csv (18 milliseconds)
[info] - SPARK-23372 error while writing empty schema files using json (18 milliseconds)
[info] - SPARK-23372 error while writing empty schema files using text (19 milliseconds)
[info] - SPARK-22146 read files containing special characters using orc (247 milliseconds)
[info] - SPARK-22146 read files containing special characters using parquet (398 milliseconds)
[info] - SPARK-22146 read files containing special characters using csv (340 milliseconds)
[info] - SPARK-22146 read files containing special characters using json (288 milliseconds)
[info] - SPARK-22146 read files containing special characters using text (235 milliseconds)
[info] - SPARK-23148 read files containing special characters using json with multiline enabled (294 milliseconds)
[info] - SPARK-23148 read files containing special characters using csv with multiline enabled (294 milliseconds)
[info] - Enabling/disabling ignoreMissingFiles using orc (1 second, 44 milliseconds)
[info] - Enabling/disabling ignoreMissingFiles using parquet (1 second, 147 milliseconds)
[info] - Enabling/disabling ignoreMissingFiles using csv (1 second, 198 milliseconds)
[info] - Enabling/disabling ignoreMissingFiles using json (1 second, 96 milliseconds)
[info] - Enabling/disabling ignoreMissingFiles using text (1 second, 58 milliseconds)
[info] - SPARK-24691 error handling for unsupported types - text (385 milliseconds)
[info] - SPARK-24204 error handling for unsupported Array/Map/Struct types - csv (712 milliseconds)
[info] - SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc (571 milliseconds)
00:03:30.343 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function testtype replaced a previously registered function.
00:03:30.648 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function testtype replaced a previously registered function.
[info] - SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc (915 milliseconds)
00:03:31.556 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 137.0 (TID 199)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$7(ParquetReadSupport.scala:335)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$6(ParquetReadSupport.scala:330)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetGroupFields(ParquetReadSupport.scala:327)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetSchema(ParquetReadSupport.scala:147)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport.init(ParquetReadSupport.scala:82)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:141)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:320)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:31.556 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 137.0 (TID 198)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$7(ParquetReadSupport.scala:335)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$6(ParquetReadSupport.scala:330)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetGroupFields(ParquetReadSupport.scala:327)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetSchema(ParquetReadSupport.scala:147)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport.init(ParquetReadSupport.scala:82)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:141)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:320)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:31.557 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 137.0 (TID 199, amp-jenkins-worker-06.amp, executor driver): java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$7(ParquetReadSupport.scala:335)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$6(ParquetReadSupport.scala:330)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetGroupFields(ParquetReadSupport.scala:327)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetSchema(ParquetReadSupport.scala:147)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport.init(ParquetReadSupport.scala:82)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:141)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:320)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:31.557 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 137.0 failed 1 times; aborting job
00:03:31.595 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 138.0 (TID 201)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$7(ParquetReadSupport.scala:335)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$6(ParquetReadSupport.scala:330)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetGroupFields(ParquetReadSupport.scala:327)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetSchema(ParquetReadSupport.scala:147)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport.init(ParquetReadSupport.scala:82)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:141)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:320)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:31.597 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 138.0 (TID 201, amp-jenkins-worker-06.amp, executor driver): java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$7(ParquetReadSupport.scala:335)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.$anonfun$clipParquetGroupFields$6(ParquetReadSupport.scala:330)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetGroupFields(ParquetReadSupport.scala:327)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport$.clipParquetSchema(ParquetReadSupport.scala:147)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport.init(ParquetReadSupport.scala:82)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:141)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:131)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:320)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:31.597 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 138.0 failed 1 times; aborting job
00:03:31.599 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 138.0 (TID 200, amp-jenkins-worker-06.amp, executor driver): TaskKilled (Stage cancelled)
[info] - Spark native readers should respect spark.sql.caseSensitive - parquet (917 milliseconds)
00:03:32.306 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 151.0 (TID 226)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:32.307 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 151.0 (TID 227)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:32.307 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 151.0 (TID 226, amp-jenkins-worker-06.amp, executor driver): java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:32.308 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 151.0 failed 1 times; aborting job
00:03:32.345 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 152.0 (TID 228)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:32.345 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 152.0 (TID 229)
java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:32.347 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 152.0 (TID 228, amp-jenkins-worker-06.amp, executor driver): java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8(OrcUtils.scala:168)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$8$adapted(OrcUtils.scala:162)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7(OrcUtils.scala:162)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.$anonfun$requestedColumnIds$7$adapted(OrcUtils.scala:159)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at scala.collection.TraversableLike.map(TraversableLike.scala:238)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
	at org.apache.spark.sql.execution.datasources.orc.OrcUtils$.requestedColumnIds(OrcUtils.scala:159)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$4(OrcFileFormat.scala:185)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat.$anonfun$buildReaderWithPartitionValues$2(OrcFileFormat.scala:183)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:169)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:487)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:32.347 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 152.0 failed 1 times; aborting job
[info] - Spark native readers should respect spark.sql.caseSensitive - orc (734 milliseconds)
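Note: the RuntimeException traces above ("Found duplicate field(s) "b": [b, B] in case-insensitive mode") belong to the two caseSensitive tests that just passed; the exceptions are expected and intercepted by the tests. A minimal sketch of how that error can be provoked is shown below. It assumes a local SparkSession and a hypothetical output path, and is an illustration rather than the suite's actual test code.

import org.apache.spark.sql.SparkSession

object DuplicateCaseFieldSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("dup-case-field").getOrCreate()
    import spark.implicits._

    val path = "/tmp/dup_case_parquet" // hypothetical location

    // Write a file whose schema has columns differing only by case while the analyzer
    // is case sensitive, so "b" and "B" are treated as distinct columns.
    spark.conf.set("spark.sql.caseSensitive", "true")
    Seq((1, 2)).toDF("b", "B").write.mode("overwrite").parquet(path)

    // Read it back case-insensitively: the requested column "b" now matches both
    // physical columns, and the scan task throws
    // java.lang.RuntimeException: Found duplicate field(s) "b": [b, B] in case-insensitive mode
    spark.conf.set("spark.sql.caseSensitive", "false")
    spark.read.schema("b INT").parquet(path).collect()
  }
}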
[info] - SPARK-25237 compute correct input metrics in FileScanRDD (325 milliseconds)
[info] - Do not use cache on overwrite (1 second, 74 milliseconds)
[info] - Do not use cache on append (1 second, 12 milliseconds)
[info] - UDF input_file_name() (399 milliseconds)
[info] - Option pathGlobFilter: filter files correctly (579 milliseconds)
[info] - Option pathGlobFilter: simple extension filtering should contains partition info (614 milliseconds)
[info] - Option recursiveFileLookup: recursive loading correctly (113 milliseconds)
[info] - Option recursiveFileLookup: disable partition inferring (40 milliseconds)
[info] - Return correct results when data columns overlap with partition columns (773 milliseconds)
[info] - sizeInBytes should be the total size of all files (314 milliseconds)
[info] - SPARK-22790,SPARK-27668: spark.sql.sources.compressionFactor takes effect (701 milliseconds)
[info] - File table location should include both values of option `path` and `paths` (593 milliseconds)
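Note: for context on the pathGlobFilter and recursiveFileLookup tests above, the sketch below shows how those two file-source options are typically used. The directories are hypothetical and the snippet is not taken from the suite.

import org.apache.spark.sql.SparkSession

object FileSourceOptionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("file-options").getOrCreate()

    // pathGlobFilter: only files whose names match the glob are read.
    val parquetOnly = spark.read
      .option("pathGlobFilter", "*.parquet")
      .parquet("/tmp/mixed_format_dir")   // hypothetical directory

    // recursiveFileLookup: descend into nested directories; this disables partition inference.
    val recursive = spark.read
      .option("recursiveFileLookup", "true")
      .json("/tmp/nested_json_dir")       // hypothetical directory

    println(parquetOnly.count() + recursive.count())
  }
}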
[info] PythonForeachWriterSuite:
Exception in thread "Thread-79327" java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.$anonfun$remove$1(PythonForeachWriter.scala:123)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.withLock(PythonForeachWriter.scala:150)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.org$apache$spark$sql$execution$python$PythonForeachWriter$UnsafeRowBuffer$$remove(PythonForeachWriter.scala:121)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:106)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:104)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.python.PythonForeachWriterSuite$BufferTester$$anon$1.run(PythonForeachWriterSuite.scala:105)
[info] - UnsafeRowBuffer: iterator blocks when no data is available (90 milliseconds)
[info] - UnsafeRowBuffer: iterator unblocks when all data added (9 milliseconds)
Exception in thread "Thread-79329" java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.$anonfun$remove$1(PythonForeachWriter.scala:123)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.withLock(PythonForeachWriter.scala:150)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.org$apache$spark$sql$execution$python$PythonForeachWriter$UnsafeRowBuffer$$remove(PythonForeachWriter.scala:121)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:106)
	at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:104)
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
	at org.apache.spark.sql.execution.python.PythonForeachWriterSuite$BufferTester$$anon$1.run(PythonForeachWriterSuite.scala:105)
[info] - UnsafeRowBuffer: handles more data than memory (2 seconds, 172 milliseconds)
[info] TPCDSQuerySuite:
[info] - udf/udf-group-analytics.sql - Scalar Pandas UDF (27 seconds, 111 milliseconds)
[info] - q1 (735 milliseconds)
[info] - udf/udf-udaf.sql - Scala UDF (1 second, 56 milliseconds)
[info] - q2 (663 milliseconds)
[info] - q3 (210 milliseconds)
[info] - udf/udf-udaf.sql - Regular Python UDF (1 second, 369 milliseconds)
[info] - udf/udf-udaf.sql - Scalar Pandas UDF (1 second, 382 milliseconds)
[info] - q4 (2 seconds, 579 milliseconds)
[info] - q5 (1 second, 1 millisecond)
[info] - q6 (630 milliseconds)
[info] - q7 (339 milliseconds)
[info] - q8 (671 milliseconds)
[info] - q9 (1 second, 174 milliseconds)
[info] - q10 (488 milliseconds)
[info] - q11 (1 second, 298 milliseconds)
[info] - q12 (236 milliseconds)
[info] - q13 (407 milliseconds)
[info] - q14a (3 seconds, 438 milliseconds)
[info] - q14b (2 seconds, 656 milliseconds)
[info] - q15 (295 milliseconds)
[info] - q16 (365 milliseconds)
[info] - q17 (555 milliseconds)
[info] - q18 (541 milliseconds)
[info] - q19 (381 milliseconds)
[info] - q20 (222 milliseconds)
[info] - q21 (273 milliseconds)
[info] - q22 (289 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part1.sql - Scala UDF (16 seconds, 446 milliseconds)
[info] - q23a (1 second, 559 milliseconds)
[info] - q23b (2 seconds, 77 milliseconds)
[info] - q24a (1 second, 32 milliseconds)
[info] - q24b (1 second, 25 milliseconds)
[info] - q25 (498 milliseconds)
[info] - q26 (328 milliseconds)
[info] - q27 (365 milliseconds)
[info] - q28 (544 milliseconds)
[info] - q29 (486 milliseconds)
[info] - q30 (732 milliseconds)
[info] - q31 (1 second, 12 milliseconds)
[info] - q32 (470 milliseconds)
[info] - q33 (851 milliseconds)
[info] - q34 (352 milliseconds)
[info] - q35 (472 milliseconds)
[info] - q36 (344 milliseconds)
[info] - q37 (224 milliseconds)
[info] - q38 (574 milliseconds)
[info] - q39a (708 milliseconds)
[info] - q39b (549 milliseconds)
[info] - q40 (354 milliseconds)
[info] - q41 (447 milliseconds)
[info] - q42 (205 milliseconds)
[info] - q43 (248 milliseconds)
00:04:17.075 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:17.075 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - q44 (705 milliseconds)
[info] - q45 (339 milliseconds)
[info] - q46 (402 milliseconds)
[info] - q47 (1 second, 389 milliseconds)
[info] - q48 (376 milliseconds)
00:04:20.601 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:20.601 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:20.602 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:20.602 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:20.603 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:20.603 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - q49 (1 second, 51 milliseconds)
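Note: the repeated WindowExec warnings above are emitted whenever a window specification has no PARTITION BY clause, which forces all rows into a single partition. A minimal sketch of a query that triggers the same warning (illustrative column names, not taken from the TPC-DS queries themselves):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

object UnpartitionedWindowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("window-warning").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "v")

    // Window.orderBy with no partitionBy logs:
    // "No Partition Defined for Window operation! Moving all data to a single partition, ..."
    val unpartitioned = Window.orderBy($"id")
    df.withColumn("rn", row_number().over(unpartitioned)).show()
  }
}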
[info] - q50 (489 milliseconds)
[info] - q51 (373 milliseconds)
[info] - q52 (205 milliseconds)
[info] - q53 (332 milliseconds)
[info] - q54 (712 milliseconds)
[info] - q55 (231 milliseconds)
[info] - q56 (818 milliseconds)
[info] - q57 (1 second, 312 milliseconds)
[info] - q58 (1 second, 228 milliseconds)
[info] - q59 (648 milliseconds)
[info] - q60 (595 milliseconds)
[info] - q61 (674 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part1.sql - Regular Python UDF (26 seconds, 773 milliseconds)
[info] - q62 (335 milliseconds)
[info] - q63 (326 milliseconds)
[info] - q64 (2 seconds, 974 milliseconds)
[info] - q65 (610 milliseconds)
[info] - q66 (1 second, 623 milliseconds)
[info] - q67 (376 milliseconds)
[info] - q68 (389 milliseconds)
[info] - q69 (500 milliseconds)
[info] - q70 (712 milliseconds)
[info] - q71 (455 milliseconds)
[info] - q72 (676 milliseconds)
[info] - q73 (306 milliseconds)
[info] - q74 (1 second, 383 milliseconds)
[info] - q75 (2 seconds, 27 milliseconds)
[info] - q76 (443 milliseconds)
[info] - q77 (1 second, 272 milliseconds)
[info] - q78 (952 milliseconds)
[info] - q79 (362 milliseconds)
[info] - q80 (1 second, 291 milliseconds)
[info] - q81 (836 milliseconds)
[info] - q82 (261 milliseconds)
[info] - q83 (718 milliseconds)
[info] - q84 (266 milliseconds)
[info] - q85 (571 milliseconds)
[info] - q86 (308 milliseconds)
[info] - q87 (592 milliseconds)
[info] - q88 (1 second, 400 milliseconds)
[info] - q89 (322 milliseconds)
[info] - q90 (396 milliseconds)
[info] - q91 (389 milliseconds)
[info] - q92 (492 milliseconds)
[info] - q93 (247 milliseconds)
[info] - q94 (377 milliseconds)
[info] - q95 (610 milliseconds)
[info] - q96 (200 milliseconds)
[info] - q97 (246 milliseconds)
[info] - q98 (263 milliseconds)
[info] - q99 (315 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part1.sql - Scalar Pandas UDF (27 seconds, 175 milliseconds)
[info] - q5a-v2.7 (2 seconds, 208 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part3.sql - Scala UDF (1 second, 40 milliseconds)
[info] - q6-v2.7 (521 milliseconds)
[info] - q10a-v2.7 (492 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part3.sql - Regular Python UDF (1 second, 352 milliseconds)
[info] - q11-v2.7 (1 second, 374 milliseconds)
[info] - q12-v2.7 (259 milliseconds)
[info] - udf/postgreSQL/udf-aggregates_part3.sql - Scalar Pandas UDF (1 second, 387 milliseconds)
Attempting to post to Github...
[error] running /home/jenkins/workspace/SparkPullRequestBuilder/build/sbt -Phadoop-2.7 -Phive-thriftserver -Phive -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.ExtendedYarnTest hive-thriftserver/test avro/test mllib/test hive/test repl/test catalyst/test sql/test sql-kafka-0-10/test examples/test ; process was terminated by signal 9
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/112436/
Test FAILed.
Finished: FAILURE