Console Output

Skipping 10,543 KB..
27 milliseconds)
[info] - Create: partitioned by bucket(4, id) (23 milliseconds)
[info] - Create: fail if table already exists (20 milliseconds)
[info] - Replace: basic behavior (197 milliseconds)
[info] - Replace: partitioned table (170 milliseconds)
[info] - Replace: fail if table does not exist (13 milliseconds)
[info] - CreateOrReplace: table does not exist (68 milliseconds)
[info] - CreateOrReplace: table exists (165 milliseconds)
00:02:22.749 WARN org.apache.spark.sql.DataFrameWriterV2Suite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DataFrameWriterV2Suite, thread names: block-manager-slave-async-thread-pool-89, block-manager-slave-async-thread-pool-67, block-manager-slave-async-thread-pool-5, block-manager-slave-async-thread-pool-41, block-manager-slave-async-thread-pool-1, block-manager-slave-async-thread-pool-57, block-manager-slave-async-thread-pool-48, block-manager-slave-async-thread-pool-8, block-manager-slave-async-thread-pool-60, block-manager-slave-async-thread-pool-43, block-manager-slave-async-thread-pool-84, block-manager-slave-async-thread-pool-12, block-manager-slave-async-thread-pool-97, block-manager-slave-async-thread-pool-91 =====

[info] FileSourceStrategySuite:
[info] - unpartitioned table, single partition (63 milliseconds)
[info] - unpartitioned table, multiple partitions (34 milliseconds)
[info] - Unpartitioned table, large file that gets split (50 milliseconds)
[info] - Unpartitioned table, many files that get split (44 milliseconds)
[info] - partitioned table (57 milliseconds)
[info] - partitioned table - case insensitive (55 milliseconds)
[info] - partitioned table - after scan filters (57 milliseconds)
[info] - bucketed table (43 milliseconds)
[info] - Locality support for FileScanRDD (3 milliseconds)
[info] - Locality support for FileScanRDD - one file per partition (31 milliseconds)
[info] - Locality support for FileScanRDD - large file (20 milliseconds)
[info] - SPARK-15654 do not split non-splittable files (76 milliseconds)
[info] - SPARK-14959: Do not call getFileBlockLocations on directories (135 milliseconds)
[info] - [SPARK-16818] partition pruned file scans implement sameResult correctly (2 seconds, 899 milliseconds)
[info] - postgreSQL/text.sql (5 seconds, 534 milliseconds)
[info] - [SPARK-16818] exchange reuse respects differences in partition pruning (1 second, 282 milliseconds)
00:02:27.778 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 12.0 (TID 210)
java.io.EOFException: Unexpected end of input stream
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:145)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
	at java.io.InputStream.read(InputStream.java:101)
	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
	at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderFromIterator.next(PartitionReaderFromIterator.scala:26)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderWithPartitionValues.next(PartitionReaderWithPartitionValues.scala:48)
	at org.apache.spark.sql.execution.datasources.v2.PartitionedFileReader.next(FilePartitionReaderFactory.scala:54)
	at org.apache.spark.sql.execution.datasources.v2.FilePartitionReader.next(FilePartitionReader.scala:70)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:62)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:02:27.788 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 12.0 (TID 210, amp-jenkins-worker-02.amp, executor driver): java.io.EOFException: Unexpected end of input stream
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:145)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
	at java.io.InputStream.read(InputStream.java:101)
	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
	at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderFromIterator.next(PartitionReaderFromIterator.scala:26)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderWithPartitionValues.next(PartitionReaderWithPartitionValues.scala:48)
	at org.apache.spark.sql.execution.datasources.v2.PartitionedFileReader.next(FilePartitionReaderFactory.scala:54)
	at org.apache.spark.sql.execution.datasources.v2.FilePartitionReader.next(FilePartitionReader.scala:70)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:62)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:02:27.788 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 12.0 failed 1 times; aborting job
00:02:27.860 WARN org.apache.spark.sql.execution.datasources.v2.FilePartitionReader: Skipped the rest of the content in the corrupted file: path: file:///home/jenkins/workspace/SparkPullRequestBuilder@3/target/tmp/input-7650527873736512002.gz, range: 0-12, partition values: [empty row]
java.io.EOFException: Unexpected end of input stream
	at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:145)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
	at java.io.InputStream.read(InputStream.java:101)
	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
	at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
	at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderFromIterator.next(PartitionReaderFromIterator.scala:26)
	at org.apache.spark.sql.execution.datasources.v2.PartitionReaderWithPartitionValues.next(PartitionReaderWithPartitionValues.scala:48)
	at org.apache.spark.sql.execution.datasources.v2.PartitionedFileReader.next(FilePartitionReaderFactory.scala:54)
	at org.apache.spark.sql.execution.datasources.v2.FilePartitionReader.next(FilePartitionReader.scala:70)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:62)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:337)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - spark.files.ignoreCorruptFiles should work in SQL (182 milliseconds)
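
The java.io.EOFException traces above come from a deliberately truncated .gz input file and are the expected output of the ignoreCorruptFiles test just logged. Below is a minimal sketch of the behavior under test, assuming a local SparkSession; the path is a placeholder, and note the SQL-side config key is spark.sql.files.ignoreCorruptFiles (the test name references the core key spark.files.ignoreCorruptFiles).

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("ignore-corrupt-files-sketch")
  .getOrCreate()

// Default (false): one truncated gzip file fails the whole job with the
// java.io.EOFException seen in the traces above.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

// With the flag on, the reader logs "Skipped the rest of the content in the
// corrupted file" (as above) and returns whatever rows it could read.
val rows = spark.read.text("/tmp/dir-with-truncated-gz") // placeholder path
rows.show()
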
[info] - [SPARK-18753] keep pushed-down null literal as a filter in Spark-side post-filter (487 milliseconds)
[info] DataFrameTungstenSuite:
[info] - test simple types (163 milliseconds)
[info] - test struct type (223 milliseconds)
[info] - test nested struct type (278 milliseconds)
[info] - primitive data type accesses in persist data (415 milliseconds)
[info] - access cache multiple times (760 milliseconds)
[info] - access only some column of the all of columns (296 milliseconds)
00:02:30.773 WARN org.apache.spark.sql.DataFrameTungstenSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DataFrameTungstenSuite, thread names: block-manager-slave-async-thread-pool-15, block-manager-slave-async-thread-pool-26, block-manager-slave-async-thread-pool-5, block-manager-slave-async-thread-pool-6, block-manager-slave-async-thread-pool-21, block-manager-slave-async-thread-pool-1, block-manager-slave-async-thread-pool-10, block-manager-slave-async-thread-pool-32, block-manager-slave-async-thread-pool-25, block-manager-slave-async-thread-pool-14, block-manager-slave-async-thread-pool-29, block-manager-ask-thread-pool-16, block-manager-slave-async-thread-pool-4, block-manager-slave-async-thread-pool-7, block-manager-ask-thread-pool-12, block-manager-slave-async-thread-pool-0, block-manager-slave-async-thread-pool-28, block-manager-slave-async-thread-pool-17, block-manager-slave-async-thread-pool-11, block-manager-slave-async-thread-pool-8, block-manager-slave-async-thread-pool-16, block-manager-slave-async-thread-pool-12, block-manager-slave-async-thread-pool-2, block-manager-slave-async-thread-pool-20, block-manager-slave-async-thread-pool-19, block-manager-slave-async-thread-pool-13, block-manager-slave-async-thread-pool-24 =====

[info] GenericWordSpecSuite:
[info] A Simple Dataset
[info]   when looked at as complete rows
[info]   - should have the specified number of elements (139 milliseconds)
[info]   - should have the specified number of unique elements (274 milliseconds)
[info]   when refined to specific columns
[info]   - should have the specified number of elements in each column (124 milliseconds)
[info]   - should have the correct number of distinct elements in each column (470 milliseconds)
[info] DDLSourceLoadSuite:
[info] - data sources with the same name - internal data sources (23 milliseconds)
00:02:32.125 WARN org.apache.spark.sql.execution.datasources.DataSource: Multiple sources found for datasource (org.apache.spark.sql.sources.FakeSourceFour, org.apache.fakesource.FakeExternalSourceThree), defaulting to the internal datasource (org.apache.spark.sql.sources.FakeSourceFour).
00:02:32.126 WARN org.apache.spark.sql.execution.datasources.DataSource: Multiple sources found for datasource (org.apache.spark.sql.sources.FakeSourceFour, org.apache.fakesource.FakeExternalSourceThree), defaulting to the internal datasource (org.apache.spark.sql.sources.FakeSourceFour).
[info] - data sources with the same name - internal data source/external data source (7 milliseconds)
[info] - data sources with the same name - external data sources (3 milliseconds)
[info] - load data source from format alias (4 milliseconds)
[info] - specify full classname with duplicate formats (5 milliseconds)
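
The two "Multiple sources found" warnings above show Spark preferring the internal provider when two data sources register the same short name. A sketch of the workaround exercised by "specify full classname with duplicate formats", assuming an in-scope SparkSession named spark; the provider class is a test fixture taken verbatim from the log.

// Passing the fully qualified provider class instead of the ambiguous short
// alias selects a data source explicitly, bypassing the default-to-internal rule.
val df = spark.read
  .format("org.apache.fakesource.FakeExternalSourceThree")
  .load()
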
[info] JsonFunctionsSuite:
[info] - function get_json_object (223 milliseconds)
[info] - function get_json_object - null (208 milliseconds)
[info] - json_tuple select (270 milliseconds)
[info] - json_tuple filter and group (540 milliseconds)
[info] - from_json (157 milliseconds)
[info] - from_json with option (133 milliseconds)
[info] - from_json missing columns (77 milliseconds)
[info] - from_json invalid json (85 milliseconds)
[info] - from_json - json doesn't conform to the array type (120 milliseconds)
[info] - from_json array support (135 milliseconds)
[info] - from_json uses DDL strings for defining a schema - java (130 milliseconds)
[info] - from_json uses DDL strings for defining a schema - scala (96 milliseconds)
[info] - to_json - struct (138 milliseconds)
[info] - to_json - array (290 milliseconds)
[info] - to_json - map (283 milliseconds)
[info] - to_json with option (126 milliseconds)
[info] - to_json - key types of map don't matter (137 milliseconds)
[info] - to_json unsupported type (21 milliseconds)
[info] - roundtrip in to_json and from_json - struct (282 milliseconds)
[info] - roundtrip in to_json and from_json - array (339 milliseconds)
[info] - SPARK-19637 Support to_json in SQL (252 milliseconds)
[info] - SPARK-19967 Support from_json in SQL (382 milliseconds)
[info] - SPARK-24027: from_json - map<string, int> (149 milliseconds)
[info] - SPARK-24027: from_json - map<string, struct> (157 milliseconds)
[info] - SPARK-24027: from_json - map<string, map<string, int>> (161 milliseconds)
[info] - SPARK-24027: roundtrip - from_json -> to_json  - map<string, string> (91 milliseconds)
[info] - SPARK-24027: roundtrip - to_json -> from_json  - map<string, string> (127 milliseconds)
[info] - SPARK-24027: from_json - wrong map<string, int> (120 milliseconds)
[info] - SPARK-24027: from_json of a map with unsupported key type (284 milliseconds)
[info] - SPARK-24709: infers schemas of json strings and pass them to from_json (12 milliseconds)
[info] - infers schemas using options (139 milliseconds)
[info] - from_json - array of primitive types (197 milliseconds)
[info] - from_json - array of primitive types - malformed row (161 milliseconds)
[info] - from_json - array of arrays (216 milliseconds)
[info] - from_json - array of arrays - malformed row (177 milliseconds)
[info] - from_json - array of structs (184 milliseconds)
[info] - from_json - array of structs - malformed row (172 milliseconds)
[info] - from_json - array of maps (215 milliseconds)
[info] - from_json - array of maps - malformed row (144 milliseconds)
[info] - to_json - array of primitive types (132 milliseconds)
[info] - roundtrip to_json -> from_json - array of primitive types (125 milliseconds)
[info] - roundtrip from_json -> to_json - array of primitive types (91 milliseconds)
[info] - roundtrip from_json -> to_json - array of arrays (85 milliseconds)
[info] - roundtrip from_json -> to_json - array of maps (103 milliseconds)
[info] - roundtrip from_json -> to_json - array of structs (100 milliseconds)
[info] - pretty print - roundtrip from_json -> to_json (100 milliseconds)
00:02:40.370 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 114.0 (TID 146)
org.apache.spark.SparkException: Malformed records are detected in record parsing. Parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:70)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:594)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:338)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.catalyst.util.BadRecordException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('1' (code 49)): was expecting a colon to separate field name and value
 at [Source: (InputStreamReader); line: 1, column: 7]
	at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:417)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.$anonfun$parser$3(jsonExpressions.scala:582)
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:60)
	... 19 more
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('1' (code 49)): was expecting a colon to separate field name and value
 at [Source: (InputStreamReader); line: 1, column: 7]
	at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
	at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:712)
	at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:637)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon2(ReaderBasedJsonParser.java:2220)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon(ReaderBasedJsonParser.java:2199)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:710)
	at org.apache.spark.sql.catalyst.json.JacksonUtils$.nextUntil(JacksonUtils.scala:29)
	at org.apache.spark.sql.catalyst.json.JacksonParser.org$apache$spark$sql$catalyst$json$JacksonParser$$convertObject(JacksonParser.scala:336)
	at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$$nestedInanonfun$makeStructRootConverter$3$1.applyOrElse(JacksonParser.scala:84)
	at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$$nestedInanonfun$makeStructRootConverter$3$1.applyOrElse(JacksonParser.scala:83)
	at org.apache.spark.sql.catalyst.json.JacksonParser.parseJsonToken(JacksonParser.scala:301)
	at org.apache.spark.sql.catalyst.json.JacksonParser.$anonfun$makeStructRootConverter$3(JacksonParser.scala:83)
	at org.apache.spark.sql.catalyst.json.JacksonParser.$anonfun$parse$2(JacksonParser.scala:406)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:401)
	... 21 more
00:02:40.398 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 114.0 (TID 146, amp-jenkins-worker-02.amp, executor driver): org.apache.spark.SparkException: Malformed records are detected in record parsing. Parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:70)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:594)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:338)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.catalyst.util.BadRecordException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('1' (code 49)): was expecting a colon to separate field name and value
 at [Source: (InputStreamReader); line: 1, column: 7]
	at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:417)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.$anonfun$parser$3(jsonExpressions.scala:582)
	at org.apache.spark.sql.catalyst.util.FailureSafeParser.parse(FailureSafeParser.scala:60)
	... 19 more
Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('1' (code 49)): was expecting a colon to separate field name and value
 at [Source: (InputStreamReader); line: 1, column: 7]
	at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
	at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:712)
	at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:637)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon2(ReaderBasedJsonParser.java:2220)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipColon(ReaderBasedJsonParser.java:2199)
	at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:710)
	at org.apache.spark.sql.catalyst.json.JacksonUtils$.nextUntil(JacksonUtils.scala:29)
	at org.apache.spark.sql.catalyst.json.JacksonParser.org$apache$spark$sql$catalyst$json$JacksonParser$$convertObject(JacksonParser.scala:336)
	at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$$nestedInanonfun$makeStructRootConverter$3$1.applyOrElse(JacksonParser.scala:84)
	at org.apache.spark.sql.catalyst.json.JacksonParser$$anonfun$$nestedInanonfun$makeStructRootConverter$3$1.applyOrElse(JacksonParser.scala:83)
	at org.apache.spark.sql.catalyst.json.JacksonParser.parseJsonToken(JacksonParser.scala:301)
	at org.apache.spark.sql.catalyst.json.JacksonParser.$anonfun$makeStructRootConverter$3(JacksonParser.scala:83)
	at org.apache.spark.sql.catalyst.json.JacksonParser.$anonfun$parse$2(JacksonParser.scala:406)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2567)
	at org.apache.spark.sql.catalyst.json.JacksonParser.parse(JacksonParser.scala:401)
	... 21 more

00:02:40.398 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 114.0 failed 1 times; aborting job
00:02:40.448 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 115.0 (TID 149)
java.lang.IllegalArgumentException: from_json() doesn't support the DROPMALFORMED mode. Acceptable modes are PERMISSIVE and FAILFAST.
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser$lzycompute(jsonExpressions.scala:568)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser(jsonExpressions.scala:563)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:594)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:338)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:02:40.449 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 115.0 (TID 148)
java.lang.IllegalArgumentException: from_json() doesn't support the DROPMALFORMED mode. Acceptable modes are PERMISSIVE and FAILFAST.
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser$lzycompute(jsonExpressions.scala:568)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser(jsonExpressions.scala:563)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:594)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:338)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:02:40.455 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 115.0 (TID 148, amp-jenkins-worker-02.amp, executor driver): java.lang.IllegalArgumentException: from_json() doesn't support the DROPMALFORMED mode. Acceptable modes are PERMISSIVE and FAILFAST.
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser$lzycompute(jsonExpressions.scala:568)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.parser(jsonExpressions.scala:563)
	at org.apache.spark.sql.catalyst.expressions.JsonToStructs.nullSafeEval(jsonExpressions.scala:594)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:338)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:127)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:455)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:02:40.455 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 115.0 failed 1 times; aborting job
[info] - from_json invalid json - check modes (251 milliseconds)
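
The three aborted jobs above are expected output of "from_json invalid json - check modes". A minimal sketch of the mode handling, assuming an in-scope SparkSession named spark; the schema and the malformed input are illustrative.

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{IntegerType, StructType}
import spark.implicits._

val schema = new StructType().add("a", IntegerType)
val df = Seq("""{"a" 1}""").toDF("json") // missing colon, as in the parse error above

// FAILFAST: throws SparkException on the malformed record (first trace above).
df.select(from_json($"json", schema, Map("mode" -> "FAILFAST"))).collect()

// PERMISSIVE (the default): the malformed record becomes a null result instead.
df.select(from_json($"json", schema, Map("mode" -> "PERMISSIVE"))).show()

// DROPMALFORMED: rejected up front with IllegalArgumentException (second trace above).
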
[info] - corrupt record column in the middle (139 milliseconds)
[info] - parse timestamps with locale (452 milliseconds)
[info] - special timestamp values (279 milliseconds)
[info] - special date values (293 milliseconds)
00:02:41.687 WARN org.apache.spark.sql.JsonFunctionsSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.JsonFunctionsSuite, thread names: block-manager-slave-async-thread-pool-26, block-manager-slave-async-thread-pool-89, block-manager-slave-async-thread-pool-90, block-manager-slave-async-thread-pool-5, block-manager-slave-async-thread-pool-36, block-manager-slave-async-thread-pool-86, block-manager-slave-async-thread-pool-95, block-manager-slave-async-thread-pool-57, block-manager-slave-async-thread-pool-28, block-manager-slave-async-thread-pool-66, block-manager-slave-async-thread-pool-98, block-manager-slave-async-thread-pool-48, block-manager-slave-async-thread-pool-39, block-manager-slave-async-thread-pool-11, block-manager-slave-async-thread-pool-47, block-manager-slave-async-thread-pool-60, block-manager-slave-async-thread-pool-34, block-manager-slave-async-thread-pool-84, block-manager-slave-async-thread-pool-24 =====

[info] TPCDSQuerySuite:
00:02:42.162 WARN org.apache.spark.sql.catalyst.util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
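
A one-line sketch of the adjustment this warning suggests; the config key is taken verbatim from the log, and 1000 is an arbitrary example value.

spark.conf.set("spark.sql.debug.maxToStringFields", 1000)
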
[info] - q1 (1 second, 263 milliseconds)
[info] - q2 (1 second, 261 milliseconds)
[info] - q3 (328 milliseconds)
[info] - q4 (4 seconds, 245 milliseconds)
[info] - q5 (1 second, 931 milliseconds)
[info] - q6 (1 second, 64 milliseconds)
[info] - postgreSQL/aggregates_part1.sql (25 seconds, 923 milliseconds)
[info] - postgreSQL/float4.sql !!! IGNORED !!!
[info] - q7 (505 milliseconds)
[info] - q8 (914 milliseconds)
[info] - q9 (1 second, 295 milliseconds)
[info] - q10 (770 milliseconds)
[info] - q11 (1 second, 969 milliseconds)
[info] - q12 (563 milliseconds)
[info] - q13 (741 milliseconds)
[info] - postgreSQL/select_having.sql (10 seconds, 596 milliseconds)
[info] - postgreSQL/numeric.sql !!! IGNORED !!!
[info] - q14a (4 seconds, 219 milliseconds)
[info] - q14b (2 seconds, 653 milliseconds)
[info] - q15 (270 milliseconds)
[info] - q16 (438 milliseconds)
[info] - q17 (640 milliseconds)
[info] - q18 (637 milliseconds)
[info] - q19 (434 milliseconds)
[info] - q20 (323 milliseconds)
[info] - q21 (343 milliseconds)
[info] - q22 (302 milliseconds)
[info] - q23a (1 second, 825 milliseconds)
[info] - q23b (2 seconds, 360 milliseconds)
[info] - q24a (1 second, 330 milliseconds)
[info] - q24b (988 milliseconds)
[info] - q25 (497 milliseconds)
[info] - q26 (331 milliseconds)
[info] - q27 (448 milliseconds)
[info] - q28 (790 milliseconds)
[info] - q29 (558 milliseconds)
[info] - q30 (867 milliseconds)
[info] - q31 (1 second, 397 milliseconds)
[info] - q32 (448 milliseconds)
[info] - q33 (971 milliseconds)
[info] - q34 (321 milliseconds)
[info] - q35 (438 milliseconds)
[info] - q36 (347 milliseconds)
[info] - q37 (272 milliseconds)
[info] - q38 (665 milliseconds)
[info] - q39a (825 milliseconds)
[info] - q39b (666 milliseconds)
[info] - q40 (359 milliseconds)
[info] - q41 (508 milliseconds)
[info] - q42 (216 milliseconds)
[info] - q43 (271 milliseconds)
00:03:27.743 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:27.744 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - q44 (877 milliseconds)
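
The WindowExec warnings around q44 (and again before q49 and q49-v2.7 below) fire when a window specification has no partitioning, which funnels every row into a single partition. A sketch of the difference, with made-up column names and an assumed DataFrame df; spark.implicits._ is assumed in scope for the $ syntax.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val global = Window.orderBy($"ts")                     // triggers the warning above
val scoped = Window.partitionBy($"key").orderBy($"ts") // keeps the work distributed

val ranked = df.withColumn("rn", row_number().over(scoped))
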
[info] - q45 (357 milliseconds)
[info] - q46 (498 milliseconds)
[info] - q47 (1 second, 542 milliseconds)
[info] - q48 (384 milliseconds)
00:03:31.723 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:31.724 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:31.724 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:31.724 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:31.725 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:31.725 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - q49 (1 second, 92 milliseconds)
[info] - q50 (448 milliseconds)
[info] - q51 (439 milliseconds)
[info] - q52 (222 milliseconds)
[info] - q53 (399 milliseconds)
[info] - q54 (835 milliseconds)
[info] - q55 (230 milliseconds)
[info] - q56 (896 milliseconds)
[info] - q57 (1 second, 460 milliseconds)
[info] - q58 (1 second, 255 milliseconds)
[info] - q59 (1 second, 11 milliseconds)
[info] - q60 (655 milliseconds)
[info] - q61 (742 milliseconds)
[info] - q62 (339 milliseconds)
[info] - q63 (353 milliseconds)
[info] - q64 (3 seconds, 250 milliseconds)
[info] - q65 (531 milliseconds)
[info] - q66 (1 second, 746 milliseconds)
[info] - q67 (431 milliseconds)
[info] - q68 (637 milliseconds)
[info] - q69 (770 milliseconds)
[info] - q70 (924 milliseconds)
[info] - q71 (717 milliseconds)
[info] - q72 (835 milliseconds)
[info] - q73 (401 milliseconds)
[info] - q74 (1 second, 710 milliseconds)
[info] - postgreSQL/union.sql (52 seconds, 101 milliseconds)
[info] - q75 (2 seconds, 719 milliseconds)
[info] - q76 (514 milliseconds)
[info] - q77 (1 second, 499 milliseconds)
[info] - postgreSQL/interval.sql (2 seconds, 775 milliseconds)
[info] - q78 (1 second, 166 milliseconds)
[info] - q79 (380 milliseconds)
[info] - q80 (1 second, 209 milliseconds)
[info] - q81 (857 milliseconds)
[info] - q82 (268 milliseconds)
[info] - q83 (874 milliseconds)
[info] - q84 (289 milliseconds)
[info] - q85 (668 milliseconds)
[info] - q86 (325 milliseconds)
[info] - q87 (623 milliseconds)
[info] - q88 (1 second, 466 milliseconds)
[info] - q89 (349 milliseconds)
[info] - q90 (422 milliseconds)
[info] - q91 (451 milliseconds)
[info] - q92 (520 milliseconds)
[info] - q93 (245 milliseconds)
[info] - q94 (401 milliseconds)
[info] - q95 (608 milliseconds)
[info] - q96 (213 milliseconds)
[info] - q97 (313 milliseconds)
[info] - q98 (291 milliseconds)
[info] - q99 (345 milliseconds)
[info] - q5a-v2.7 (2 seconds, 393 milliseconds)
[info] - q6-v2.7 (623 milliseconds)
[info] - q10a-v2.7 (597 milliseconds)
[info] - q11-v2.7 (1 second, 523 milliseconds)
[info] - postgreSQL/boolean.sql (17 seconds, 374 milliseconds)
[info] - q12-v2.7 (275 milliseconds)
[info] - q14-v2.7 (3 seconds, 694 milliseconds)
[info] - postgreSQL/case.sql (10 seconds, 301 milliseconds)
[info] - postgreSQL/int4.sql !!! IGNORED !!!
[info] - postgreSQL/int8.sql !!! IGNORED !!!
[info] - postgreSQL/aggregates_part3.sql (1 second, 74 milliseconds)
[info] - postgreSQL/aggregates_part4.sql (628 milliseconds)
[info] - postgreSQL/comments.sql (890 milliseconds)
[info] - q14a-v2.7 (15 seconds, 376 milliseconds)
[info] - q18a-v2.7 (2 seconds, 327 milliseconds)
[info] - q20-v2.7 (305 milliseconds)
[info] - q22-v2.7 (448 milliseconds)
[info] - postgreSQL/strings.sql (10 seconds, 163 milliseconds)
[info] - postgreSQL/timestamp.sql !!! IGNORED !!!
[info] - q22a-v2.7 (1 second, 122 milliseconds)
[info] - q24-v2.7 (1 second, 89 milliseconds)
[info] - q27a-v2.7 (1 second, 70 milliseconds)
[info] - q34-v2.7 (387 milliseconds)
[info] - q35-v2.7 (548 milliseconds)
[info] - q35a-v2.7 (569 milliseconds)
[info] - q36a-v2.7 (863 milliseconds)
[info] - q47-v2.7 (1 second, 651 milliseconds)
00:04:45.901 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:45.901 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:45.902 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:45.902 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:45.902 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:45.902 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - q49-v2.7 (1 second, 187 milliseconds)
[info] - q51a-v2.7 (2 seconds, 184 milliseconds)
[info] - q57-v2.7 (1 second, 837 milliseconds)
[info] - q64-v2.7 (3 seconds, 490 milliseconds)
[info] - q67a-v2.7 (2 seconds, 232 milliseconds)
[info] - q70a-v2.7 (1 second, 342 milliseconds)
[info] - q72-v2.7 (691 milliseconds)
[info] - q74-v2.7 (1 second, 609 milliseconds)
Attempting to post to Github...
[error] running /home/jenkins/workspace/SparkPullRequestBuilder@3/build/sbt -Phadoop-2.7 -Phive-thriftserver -Phive -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.ExtendedYarnTest hive-thriftserver/test avro/test mllib/test hive/test repl/test catalyst/test sql/test sql-kafka-0-10/test examples/test ; process was terminated by signal 9
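(Signal 9 is SIGKILL: the sbt process was killed externally, typically by the worker's kernel OOM killer or a build timeout, so the run died mid-suite rather than on a test assertion.)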
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/112439/
Test FAILed.
Finished: FAILURE