Failed
Console Output

Skipping 9,250 KB..
org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job 40bba6f0-075a-4df7-ae83-00d1f517119e.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 156.0 failed 1 times, most recent failure: Lost task 0.0 in stage 156.0 (TID 271) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Casting 9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2160)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2159)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2159)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1076)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2398)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2340)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2329)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:866)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2128)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:612)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:607)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$sql$1(InsertSuite.scala:60)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$139(InsertSuite.scala:732)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$138(InsertSuite.scala:731)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
	at org.apache.spark.sql.sources.InsertSuite.withTable(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$137(InsertSuite.scala:728)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.sources.InsertSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.sources.InsertSuite.withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$136(InsertSuite.scala:728)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	... 3 more
Caused by: java.lang.ArithmeticException: Casting 9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
14:11:45.701 ERROR org.apache.spark.util.Utils: Aborting task
java.lang.ArithmeticException: Casting -9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:11:45.703 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Job job_20201028141145_0157 aborted.
14:11:45.703 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 157.0 (TID 272)
org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Casting -9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
14:11:45.706 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 157.0 (TID 272) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Casting -9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

14:11:45.706 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 157.0 failed 1 times; aborting job
14:11:45.707 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job dda9c9ba-ecde-4efa-a80a-382ee0a06d7d.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 157.0 failed 1 times, most recent failure: Lost task 0.0 in stage 157.0 (TID 272) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Casting -9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2160)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2159)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2159)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1076)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2398)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2340)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2329)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:866)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2128)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:612)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:607)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$sql$1(InsertSuite.scala:60)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$140(InsertSuite.scala:738)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$138(InsertSuite.scala:737)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
	at org.apache.spark.sql.sources.InsertSuite.withTable(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$137(InsertSuite.scala:728)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.sources.InsertSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.sources.InsertSuite.withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$136(InsertSuite.scala:728)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	... 3 more
Caused by: java.lang.ArithmeticException: Casting -9.223373E18 to long causes overflow
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
[info] - Throw exceptions on inserting out-of-range long value with ANSI casting policy (352 milliseconds)
14:11:45.907 ERROR org.apache.spark.util.Utils: Aborting task
java.lang.ArithmeticException: Decimal(compact,12345,5,2}) cannot be represented as Decimal(3, 2).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:11:45.908 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Job job_20201028141145_0158 aborted.
14:11:45.908 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 158.0 (TID 273)
org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Decimal(compact,12345,5,2}) cannot be represented as Decimal(3, 2).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
14:11:45.911 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 158.0 (TID 273) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Decimal(compact,12345,5,2}) cannot be represented as Decimal(3, 2).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

14:11:45.911 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 158.0 failed 1 times; aborting job
14:11:45.913 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job 0ef9f87e-5ca8-4f03-9efb-7efe0e28b220.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 158.0 failed 1 times, most recent failure: Lost task 0.0 in stage 158.0 (TID 273) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: Decimal(compact,12345,5,2}) cannot be represented as Decimal(3, 2).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2160)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2159)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2159)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1076)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2398)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2340)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2329)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:866)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2128)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:612)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:607)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$sql$1(InsertSuite.scala:60)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$144(InsertSuite.scala:752)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$143(InsertSuite.scala:751)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
	at org.apache.spark.sql.sources.InsertSuite.withTable(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$142(InsertSuite.scala:748)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.sources.InsertSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.sources.InsertSuite.withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$141(InsertSuite.scala:748)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	... 3 more
Caused by: java.lang.ArithmeticException: Decimal(compact,12345,5,2}) cannot be represented as Decimal(3, 2).
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
[info] - Throw exceptions on inserting out-of-range decimal value with ANSI casting policy (202 milliseconds)
[info] - SPARK-30844: static partition should also follow StoreAssignmentPolicy (499 milliseconds)
[info] - SPARK-24860: dynamic partition overwrite specified per source without catalog table (1 second, 212 milliseconds)
[info] - SPARK-24583 Wrong schema type in InsertIntoDataSourceCommand (83 milliseconds)
14:11:47.880 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299/part1=1/part-00000-3d9c8da7-7c63-492b-82c3-3e6a99507d75.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:11:47.880 WARN org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299
14:11:47.880 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Job job_20201028141147_0175 aborted.
14:11:47.881 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 175.0 (TID 299)
org.apache.spark.TaskOutputFileAlreadyExistException: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299/part1=1/part-00000-3d9c8da7-7c63-492b-82c3-3e6a99507d75.c000.snappy.parquet already exists
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:294)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299/part1=1/part-00000-3d9c8da7-7c63-492b-82c3-3e6a99507d75.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
14:11:47.889 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0.0 in stage 175.0 (TID 299) can not write to output file: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299/part1=1/part-00000-3d9c8da7-7c63-492b-82c3-3e6a99507d75.c000.snappy.parquet already exists; not retrying
14:11:47.890 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job 06fedad0-2aac-4b71-a750-b94e05317506.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 175.0 (TID 299) can not write to output file: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0175_m_000000_299/part1=1/part-00000-3d9c8da7-7c63-492b-82c3-3e6a99507d75.c000.snappy.parquet already exists
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2160)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2159)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2159)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1076)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2398)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2340)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2329)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:866)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2128)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:126)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:985)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:985)
	at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:541)
	at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:496)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$163(InsertSuite.scala:842)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$162(InsertSuite.scala:841)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
	at org.apache.spark.sql.sources.InsertSuite.withTable(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$161(InsertSuite.scala:833)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.sources.InsertSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.sources.InsertSuite.withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$160(InsertSuite.scala:833)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$160$adapted(InsertSuite.scala:829)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$159(InsertSuite.scala:829)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:11:48.066 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300/part1=1/part-00000-94133fea-250f-4784-9e98-c1a6d9a38e51.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:11:48.067 WARN org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300
14:11:48.067 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Job job_20201028141147_0176 aborted.
14:11:48.067 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 176.0 (TID 300)
org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300/part1=1/part-00000-94133fea-250f-4784-9e98-c1a6d9a38e51.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
14:11:48.069 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 176.0 (TID 300) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300/part1=1/part-00000-94133fea-250f-4784-9e98-c1a6d9a38e51.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

14:11:48.069 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 176.0 failed 1 times; aborting job
14:11:48.071 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job b5155c6e-e8c1-4713-9d54-00088aac2fd2.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 176.0 failed 1 times, most recent failure: Lost task 0.0 in stage 176.0 (TID 300) (amp-jenkins-worker-03.amp executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300/part1=1/part-00000-94133fea-250f-4784-9e98-c1a6d9a38e51.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2160)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2159)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2159)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1076)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1076)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2398)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2340)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2329)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:866)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2128)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:200)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:126)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:985)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:985)
	at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:541)
	at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:496)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$163(InsertSuite.scala:842)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$162(InsertSuite.scala:841)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
	at org.apache.spark.sql.sources.InsertSuite.withTable(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$161(InsertSuite.scala:833)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.sources.InsertSuite.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.sources.InsertSuite.withSQLConf(InsertSuite.scala:57)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$160(InsertSuite.scala:833)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$160$adapted(InsertSuite.scala:829)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.sources.InsertSuite.$anonfun$new$159(InsertSuite.scala:829)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:296)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	... 3 more
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.sources.InsertSuite/t/_temporary/0/_temporary/attempt_20201028141147_0176_m_000000_300/part1=1/part-00000-94133fea-250f-4784-9e98-c1a6d9a38e51.c000.snappy.parquet already exists
	at org.apache.spark.sql.sources.FileExistingTestFileSystem.create(InsertSuite.scala:908)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:150)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:241)
	at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:262)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:278)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1460)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
	... 9 more
[info] - Stop task set if FileAlreadyExistsException was thrown (358 milliseconds)
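The test above drives a file system whose create() always reports an existing file (the FileExistingTestFileSystem frames in the traces). Below is a rough, hypothetical sketch of such a test-only file system; the class name and any registration mechanism are assumptions for illustration, not taken from the suite.

import org.apache.hadoop.fs.{FSDataOutputStream, FileAlreadyExistsException, Path, RawLocalFileSystem}
import org.apache.hadoop.fs.permission.FsPermission
import org.apache.hadoop.util.Progressable

// Hypothetical test-only file system: every create() call claims the target
// file already exists, so any Spark write task routed through it fails
// immediately with FileAlreadyExistsException, as in the stack traces above.
class AlwaysExistingFileSystem extends RawLocalFileSystem {
  override def create(
      f: Path,
      permission: FsPermission,
      overwrite: Boolean,
      bufferSize: Int,
      replication: Short,
      blockSize: Long,
      progress: Progressable): FSDataOutputStream = {
    throw new FileAlreadyExistsException(s"${f.toUri.getPath} already exists")
  }
}

How such a file system is wired into the suite (e.g. via a Hadoop fs.<scheme>.impl setting) is suite-specific and not shown here.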
14:11:48.091 WARN org.apache.spark.util.HadoopFSUtils: The directory file:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-3213bcb8-1a39-4927-8ba5-13d87dd9c2c5 was not found. Was it deleted very recently?
14:11:48.095 WARN org.apache.spark.util.HadoopFSUtils: The directory file:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-3213bcb8-1a39-4927-8ba5-13d87dd9c2c5 was not found. Was it deleted very recently?
[info] - SPARK-29174 Support LOCAL in INSERT OVERWRITE DIRECTORY to data source (798 milliseconds)
[info] - SPARK-29174 fail LOCAL in INSERT OVERWRITE DIRECT remote path (2 milliseconds)
[info] - SPARK-32508 Disallow empty part col values in partition spec before static partition writing (712 milliseconds)
14:11:49.634 WARN org.apache.spark.sql.sources.InsertSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.sources.InsertSuite, thread names: shuffle-boss-2390-1, rpc-boss-2387-1 =====

[info] DataFrameSetOperationsSuite:
[info] - except (3 seconds, 468 milliseconds)
[info] - SPARK-23274: except between two projects without references used in filter (819 milliseconds)
[info] - except distinct - SQL compliance (397 milliseconds)
[info] - except - nullability (1 second, 650 milliseconds)
[info] - except all (4 seconds, 536 milliseconds)
[info] - exceptAll - nullability (2 seconds, 121 milliseconds)
[info] - intersect (2 seconds, 574 milliseconds)
[info] - intersect - nullability (1 second, 638 milliseconds)
[info] - intersectAll (3 seconds, 642 milliseconds)
[info] - intersectAll - nullability (1 second, 822 milliseconds)
[info] - SPARK-10539: Project should not be pushed down through Intersect or Except (455 milliseconds)
[info] - SPARK-10740: handle nondeterministic expressions correctly for set operations (1 second, 509 milliseconds)
[info] - SPARK-17123: Performing set operations that combine non-scala native types (475 milliseconds)
[info] - SPARK-19893: cannot run set operations with map type (20 milliseconds)
[info] - union all (1 second, 291 milliseconds)
[info] - union should union DataFrames with UDTs (SPARK-13410) (296 milliseconds)
[info] - union by name (227 milliseconds)
[info] - union by name - type coercion (626 milliseconds)
[info] - union by name - check case sensitivity (123 milliseconds)
[info] - union by name - check name duplication (43 milliseconds)
[info] - SPARK-25368 Incorrect predicate pushdown returns wrong result (748 milliseconds)
[info] - SPARK-29358: Make unionByName optionally fill missing columns with nulls (632 milliseconds)
[info] - SPARK-32376: Make unionByName null-filling behavior work with struct columns - simple (449 milliseconds)
[info] - SPARK-32376: Make unionByName null-filling behavior work with struct columns - nested (570 milliseconds)
[info] - SPARK-32376: Make unionByName null-filling behavior work with struct columns - case-sensitive cases (514 milliseconds)
[info] - SPARK-32376: Make unionByName null-filling behavior work with struct columns - edge case (150 milliseconds)
[info] - SPARK-32376: Make unionByName null-filling behavior work with struct columns - deep expr (2 seconds, 686 milliseconds)
14:12:23.243 WARN org.apache.spark.sql.DataFrameSetOperationsSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DataFrameSetOperationsSuite, thread names: shuffle-boss-2396-1, rpc-boss-2393-1 =====

[info] SparkPlannerSuite:
[info] - Ensure to go down only the first branch, not any other possible branches (78 milliseconds)
14:12:23.414 WARN org.apache.spark.sql.execution.SparkPlannerSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.SparkPlannerSuite, thread names: rpc-boss-2399-1, shuffle-boss-2402-1 =====

[info] DatasetSerializerRegistratorSuite:
[info] - Kryo registrator (56 milliseconds)
14:12:23.538 WARN org.apache.spark.sql.DatasetSerializerRegistratorSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DatasetSerializerRegistratorSuite, thread names: shuffle-boss-2408-1, rpc-boss-2405-1 =====

[info] StreamingQueryStatusAndProgressSuite:
[info] - StreamingQueryProgress - prettyJson (1 millisecond)
[info] - StreamingQueryProgress - json (0 milliseconds)
[info] - StreamingQueryProgress - toString (1 millisecond)
[info] - StreamingQueryStatus - prettyJson (0 milliseconds)
[info] - StreamingQueryStatus - json (0 milliseconds)
[info] - StreamingQueryStatus - toString (0 milliseconds)
14:12:23.598 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-a16decc0-a389-41c3-a5c7-f91e95175edf. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
[info] - progress classes should be Serializable (871 milliseconds)
14:12:24.477 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-0e5c340d-2d6c-4995-8905-a9602ddb1865. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
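The two StreamingQueryManager warnings above name the spark.sql.streaming.forceDeleteTempCheckpointLocation setting. A minimal sketch of setting that flag on a session follows; the rate-source query is only an illustrative placeholder, not part of the suite.

import org.apache.spark.sql.SparkSession

object ForceDeleteTempCheckpointExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("force-delete-temp-checkpoint")
      // Ask Spark to delete the auto-created temporary checkpoint directory
      // even if the streaming query fails (the behavior the warning describes).
      .config("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")
      .getOrCreate()

    // A trivial streaming query with no explicit checkpointLocation, so Spark
    // creates a temporary one, as logged by StreamingQueryManager above.
    val query = spark.readStream
      .format("rate")
      .load()
      .writeStream
      .format("console")
      .start()

    query.awaitTermination(5000)
    query.stop()
    spark.stop()
  }
}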
[info] - SPARK-19378: Continue reporting stateOp metrics even if there is no active trigger (567 milliseconds)
[info] - SPARK-29973: Make `processedRowsPerSecond` calculated more accurately and meaningfully (149 milliseconds)
14:12:25.195 WARN org.apache.spark.sql.streaming.StreamingQueryStatusAndProgressSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.StreamingQueryStatusAndProgressSuite, thread names: state-store-maintenance-task, shuffle-boss-2414-1, rpc-boss-2411-1 =====

[info] PartitionBatchPruningSuite:
[info] - SELECT key FROM pruningData WHERE key = 1 (100 milliseconds)
[info] - SELECT key FROM pruningData WHERE 1 = key (57 milliseconds)
[info] - SELECT key FROM pruningData WHERE key <=> 1 (63 milliseconds)
[info] - SELECT key FROM pruningData WHERE 1 <=> key (57 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 12 (62 milliseconds)
[info] - SELECT key FROM pruningData WHERE key <= 11 (62 milliseconds)
[info] - SELECT key FROM pruningData WHERE key > 88 (68 milliseconds)
[info] - SELECT key FROM pruningData WHERE key >= 89 (56 milliseconds)
[info] - SELECT key FROM pruningData WHERE 12 > key (55 milliseconds)
[info] - SELECT key FROM pruningData WHERE 11 >= key (53 milliseconds)
[info] - SELECT key FROM pruningData WHERE 88 < key (49 milliseconds)
[info] - SELECT key FROM pruningData WHERE 89 <= key (50 milliseconds)
[info] - SELECT _1 FROM pruningArrayData WHERE _1 = array(1) (89 milliseconds)
[info] - SELECT _1 FROM pruningArrayData WHERE _1 <= array(1) (55 milliseconds)
[info] - SELECT _1 FROM pruningArrayData WHERE _1 >= array(1) (61 milliseconds)
[info] - SELECT _1 FROM pruningBinaryData WHERE _1 == binary(chr(1)) (80 milliseconds)
[info] - SELECT key FROM pruningData WHERE value IS NULL (71 milliseconds)
[info] - SELECT key FROM pruningData WHERE value IS NOT NULL (68 milliseconds)
[info] - SELECT key FROM pruningData WHERE key > 8 AND key <= 21 (62 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 2 OR key > 99 (64 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 12 AND key IS NOT NULL (51 milliseconds)
[info] - SELECT key FROM pruningData WHERE key < 2 OR (key > 78 AND key < 92) (61 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key < 88) (62 milliseconds)
[info] - SELECT key FROM pruningData WHERE key IN (1) (52 milliseconds)
[info] - SELECT key FROM pruningData WHERE key IN (1, 2) (61 milliseconds)
[info] - SELECT key FROM pruningData WHERE key IN (1, 11) (69 milliseconds)
[info] - SELECT key FROM pruningData WHERE key IN (1, 21, 41, 61, 81) (86 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE s = '100' (99 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE s < '102' (63 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE s IN ('99', '150', '201') (63 milliseconds)
[info] - SELECT _1 FROM pruningArrayData WHERE _1 IN (array(1), array(2, 2)) (66 milliseconds)
[info] - SELECT key FROM pruningData WHERE key IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30) (63 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30)) (56 milliseconds)
[info] - SELECT key FROM pruningData WHERE NOT (key IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30)) AND key > 88 (59 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE s like '18%' (75 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE s like '%' (60 milliseconds)
[info] - SELECT CAST(s AS INT) FROM pruningStringData WHERE '18%' like s (73 milliseconds)
[info] - disable IN_MEMORY_PARTITION_PRUNING (45 milliseconds)
14:12:31.082 WARN org.apache.spark.sql.execution.columnar.PartitionBatchPruningSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.columnar.PartitionBatchPruningSuite, thread names: block-manager-storage-async-thread-pool-7, block-manager-storage-async-thread-pool-19, block-manager-storage-async-thread-pool-89, block-manager-storage-async-thread-pool-61, block-manager-storage-async-thread-pool-37, block-manager-storage-async-thread-pool-84, block-manager-storage-async-thread-pool-78, block-manager-storage-async-thread-pool-50, block-manager-storage-async-thread-pool-73, block-manager-storage-async-thread-pool-2, block-manager-storage-async-thread-pool-68, block-manager-storage-async-thread-pool-80, block-manager-storage-async-thread-pool-33, block-manager-storage-async-thread-pool-91, block-manager-storage-async-thread-pool-57, block-manager-storage-async-thread-pool-44, block-manager-storage-async-thread-pool-74, block-manager-storage-async-thread-pool-30, block-manager-storage-async-thread-pool-26, block-manager-storage-async-thread-pool-41, block-manager-storage-async-thread-pool-96, block-manager-storage-async-thread-pool-15, block-manager-storage-async-thread-pool-25, block-manager-storage-async-thread-pool-79, block-manager-storage-async-thread-pool-20, block-manager-storage-async-thread-pool-51, block-manager-storage-async-thread-pool-83, block-manager-storage-async-thread-pool-42, block-manager-storage-async-thread-pool-72, block-manager-storage-async-thread-pool-36, block-manager-storage-async-thread-pool-62, block-manager-storage-async-thread-pool-31, block-manager-storage-async-thread-pool-67, block-manager-storage-async-thread-pool-56, block-manager-storage-async-thread-pool-24, block-manager-storage-async-thread-pool-95, block-manager-storage-async-thread-pool-47, block-manager-storage-async-thread-pool-0, block-manager-storage-async-thread-pool-59, block-manager-storage-async-thread-pool-13, block-manager-storage-async-thread-pool-29, block-manager-storage-async-thread-pool-94, block-manager-storage-async-thread-pool-9, block-manager-storage-async-thread-pool-39, rpc-boss-2417-1, block-manager-storage-async-thread-pool-32, block-manager-storage-async-thread-pool-12, block-manager-storage-async-thread-pool-71, block-manager-storage-async-thread-pool-28, block-manager-storage-async-thread-pool-63, block-manager-storage-async-thread-pool-23, block-manager-storage-async-thread-pool-82, shuffle-boss-2420-1, block-manager-storage-async-thread-pool-55, block-manager-storage-async-thread-pool-46, block-manager-storage-async-thread-pool-17, block-manager-storage-async-thread-pool-18, block-manager-storage-async-thread-pool-76, block-manager-storage-async-thread-pool-66, block-manager-storage-async-thread-pool-98, block-manager-storage-async-thread-pool-35, block-manager-storage-async-thread-pool-8, block-manager-storage-async-thread-pool-99, block-manager-storage-async-thread-pool-60, block-manager-storage-async-thread-pool-88, block-manager-storage-async-thread-pool-54, block-manager-storage-async-thread-pool-77, block-manager-storage-async-thread-pool-3, block-manager-storage-async-thread-pool-10, block-manager-storage-async-thread-pool-21, block-manager-storage-async-thread-pool-49, block-manager-storage-async-thread-pool-85, block-manager-storage-async-thread-pool-90, block-manager-storage-async-thread-pool-45, block-manager-storage-async-thread-pool-34, block-manager-storage-async-thread-pool-92, block-manager-storage-async-thread-pool-81, block-manager-storage-async-thread-pool-43, block-manager-storage-async-thread-pool-69, 
block-manager-storage-async-thread-pool-11, block-manager-storage-async-thread-pool-16, block-manager-storage-async-thread-pool-22, block-manager-storage-async-thread-pool-97, block-manager-storage-async-thread-pool-4, block-manager-storage-async-thread-pool-64, block-manager-storage-async-thread-pool-53, block-manager-storage-async-thread-pool-27, block-manager-storage-async-thread-pool-86, block-manager-storage-async-thread-pool-58, block-manager-storage-async-thread-pool-70, block-manager-storage-async-thread-pool-40 =====

[info] FileStreamSinkLogSuite:
[info] - shouldRetain (13 milliseconds)
[info] - serialize (5 milliseconds)
[info] - deserialize (5 milliseconds)
[info] - compact (305 milliseconds)
[info] - delete expired file (391 milliseconds)
[info] - read Spark 2.1.0 log format (5 milliseconds)
14:12:31.867 WARN org.apache.spark.sql.execution.streaming.CheckpointFileManager: Could not use FileContext API for managing Structured Streaming checkpoint files at FileStreamSinkLogSuite251584963fs:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-3736536e-16a8-4d88-b685-23fbc22b8d9c. Using FileSystem API instead for managing log files. If the implementation of FileSystem.rename() is not atomic, then the correctness and fault-tolerance of your Structured Streaming is not guaranteed.

[info] - getLatestBatchId (31 milliseconds)
14:12:31.931 WARN org.apache.spark.sql.execution.streaming.FileStreamSinkLogSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.streaming.FileStreamSinkLogSuite, thread names: shuffle-boss-2426-1, rpc-boss-2423-1 =====

[info] VectorizedOrcReadSchemaSuite:
[info] - append column at the end (729 milliseconds)
[info] - hide column at the end (616 milliseconds)
[info] - append column into middle (466 milliseconds)
[info] - hide column in the middle (462 milliseconds)
[info] - add a nested column at the end of the leaf struct column (461 milliseconds)
[info] - add a nested column in the middle of the leaf struct column (430 milliseconds)
[info] - add a nested column at the end of the middle struct column (535 milliseconds)
[info] - add a nested column in the middle of the middle struct column (708 milliseconds)
[info] - hide a nested column at the end of the leaf struct column (651 milliseconds)
[info] - hide a nested column in the middle of the leaf struct column (625 milliseconds)
[info] - hide a nested column at the end of the middle struct column (613 milliseconds)
[info] - hide a nested column in the middle of the middle struct column (692 milliseconds)
[info] - change column position (584 milliseconds)
[info] - change column type from boolean to byte/short/int/long (760 milliseconds)
[info] - change column type from byte to short/int/long (526 milliseconds)
[info] - change column type from short to int/long (401 milliseconds)
[info] - change column type from int to long (309 milliseconds)
[info] - read byte, int, short, long together (891 milliseconds)
[info] - change column type from float to double (346 milliseconds)
[info] - read float and double together (488 milliseconds)
14:12:43.326 WARN org.apache.spark.sql.execution.datasources.VectorizedOrcReadSchemaSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.VectorizedOrcReadSchemaSuite, thread names: block-manager-storage-async-thread-pool-48, block-manager-storage-async-thread-pool-33, block-manager-storage-async-thread-pool-20, block-manager-storage-async-thread-pool-67, block-manager-storage-async-thread-pool-94, block-manager-storage-async-thread-pool-23, block-manager-storage-async-thread-pool-76, rpc-boss-2429-1, block-manager-storage-async-thread-pool-88, block-manager-storage-async-thread-pool-49, block-manager-storage-async-thread-pool-85, shuffle-boss-2432-1, block-manager-storage-async-thread-pool-81, block-manager-storage-async-thread-pool-97 =====

[info] ComplexTypesSuite:
[info] - simple case (142 milliseconds)
[info] - named_struct is used in the top Project (486 milliseconds)
[info] - expression in named_struct (341 milliseconds)
[info] - nested case (296 milliseconds)
[info] - SPARK-32167: get field from an array of struct (101 milliseconds)
14:12:45.006 WARN org.apache.spark.sql.ComplexTypesSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.ComplexTypesSuite, thread names: rpc-boss-2435-1, shuffle-boss-2438-1 =====

[info] HashedRelationSuite:
[info] - UnsafeHashedRelation (49 milliseconds)
[info] - test serialization empty hash map (1 millisecond)
[info] - LongToUnsafeRowMap (15 milliseconds)
[info] - LongToUnsafeRowMap with very wide range (2 milliseconds)
[info] - LongToUnsafeRowMap with random keys (2 seconds, 505 milliseconds)
[info] - SPARK-24257: insert big values into LongToUnsafeRowMap (18 milliseconds)
[info] - SPARK-24809: Serializing LongToUnsafeRowMap in executor may result in data error (8 milliseconds)
[info] - Spark-14521 (71 milliseconds)
[info] - SPARK-31511: Make BytesToBytesMap iterators thread-safe (100 milliseconds)
[info] - build HashedRelation that is larger than 1G !!! IGNORED !!!
[info] - build HashedRelation with more than 100 millions rows !!! IGNORED !!!
[info] - UnsafeHashedRelation: key set iterator on a contiguous array of keys (34 milliseconds)
[info] - UnsafeHashedRelation: key set iterator on a sparse array of keys (24 milliseconds)
[info] - LongHashedRelation: key set iterator on a contiguous array of keys (4 milliseconds)
[info] - LongToUnsafeRowMap: key set iterator on a contiguous array of keys (14 milliseconds)
[info] - LongToUnsafeRowMap: key set iterator on a sparse array with equidistant keys (2 milliseconds)
[info] - LongToUnsafeRowMap: key set iterator on an array with a single key (880 milliseconds)
[info] - LongToUnsafeRowMap: multiple hasNext calls before calling next() on the key iterator (8 milliseconds)
[info] - LongToUnsafeRowMap: no explicit hasNext calls on the key iterator (5 milliseconds)
[info] - LongToUnsafeRowMap: call hasNext at the end of the iterator (2 milliseconds)
[info] - LongToUnsafeRowMap: random sequence of hasNext and next() calls on the key iterator (4 milliseconds)
[info] - HashJoin: packing and unpacking with the same key type in a LongType (29 milliseconds)
[info] - HashJoin: packing and unpacking with various key types in a LongType (13 milliseconds)
[info] - EmptyHashedRelation override methods behavior test (0 milliseconds)
[info] - SPARK-32399: test methods related to key index (31 milliseconds)
14:12:48.944 WARN org.apache.spark.sql.execution.joins.HashedRelationSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.joins.HashedRelationSuite, thread names: shuffle-boss-2444-1, rpc-boss-2441-1 =====

[info] JDBCWriteSuite:
[info] - Basic CREATE (229 milliseconds)
[info] - Basic CREATE with illegal batchsize (16 milliseconds)
[info] - Basic CREATE with batchsize (298 milliseconds)
[info] - CREATE with ignore (235 milliseconds)
[info] - CREATE with overwrite (230 milliseconds)
[info] - CREATE then INSERT to append (210 milliseconds)
[info] - SPARK-18123 Append with column names with different cases (188 milliseconds)
[info] - Truncate (218 milliseconds)
[info] - createTableOptions (11 milliseconds)
[info] - Incompatible INSERT to append (55 milliseconds)
[info] - INSERT to JDBC Datasource (125 milliseconds)
[info] - INSERT to JDBC Datasource with overwrite (166 milliseconds)
[info] - save works for format("jdbc") if url and dbtable are set (102 milliseconds)
[info] - save API with SaveMode.Overwrite (199 milliseconds)
[info] - save errors if url is not specified (9 milliseconds)
[info] - save errors if dbtable is not specified (18 milliseconds)
[info] - save errors if wrong user/password combination (743 milliseconds)
[info] - save errors if partitionColumn and numPartitions and bounds not set (9 milliseconds)
[info] - SPARK-18433: Improve DataSource option keys to be more case-insensitive (44 milliseconds)
[info] - SPARK-18413: Use `numPartitions` JDBCOption (11 milliseconds)
[info] - SPARK-19318 temporary view data source option keys should be case-insensitive (100 milliseconds)
[info] - SPARK-10849: test schemaString - from createTableColumnTypes option values (14 milliseconds)
[info] - SPARK-10849: create table using user specified column type and verify on target table (328 milliseconds)
[info] - SPARK-10849: jdbc CreateTableColumnTypes option with invalid data type (9 milliseconds)
[info] - SPARK-10849: jdbc CreateTableColumnTypes option with invalid syntax (9 milliseconds)
[info] - SPARK-10849: jdbc CreateTableColumnTypes duplicate columns (11 milliseconds)
[info] - SPARK-10849: jdbc CreateTableColumnTypes invalid columns (18 milliseconds)
14:12:53.559 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 76.0 (TID 98)
org.h2.jdbc.JdbcBatchUpdateException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1234)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:679)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1(JdbcUtils.scala:853)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1$adapted(JdbcUtils.scala:851)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1020)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1020)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2168)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
	at org.h2.message.DbException.get(DbException.java:179)
	at org.h2.message.DbException.get(DbException.java:155)
	at org.h2.table.Column.validateConvertUpdateSequence(Column.java:345)
	at org.h2.table.Table.validateConvertUpdateSequence(Table.java:793)
	at org.h2.command.dml.Insert.insertRows(Insert.java:151)
	at org.h2.command.dml.Insert.update(Insert.java:114)
	at org.h2.command.CommandContainer.update(CommandContainer.java:101)
	at org.h2.command.Command.executeUpdate(Command.java:260)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:164)
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1215)
	... 14 more
org.h2.jdbc.JdbcSQLException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
	at org.h2.message.DbException.get(DbException.java:179)
	at org.h2.message.DbException.get(DbException.java:155)
	at org.h2.table.Column.validateConvertUpdateSequence(Column.java:345)
	at org.h2.table.Table.validateConvertUpdateSequence(Table.java:793)
	at org.h2.command.dml.Insert.insertRows(Insert.java:151)
	at org.h2.command.dml.Insert.update(Insert.java:114)
	at org.h2.command.CommandContainer.update(CommandContainer.java:101)
	at org.h2.command.Command.executeUpdate(Command.java:260)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:164)
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1215)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:679)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1(JdbcUtils.scala:853)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1$adapted(JdbcUtils.scala:851)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1020)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1020)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2168)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
14:12:53.567 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 76.0 (TID 98) (amp-jenkins-worker-03.amp executor driver): org.h2.jdbc.JdbcBatchUpdateException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1234)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:679)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1(JdbcUtils.scala:853)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1$adapted(JdbcUtils.scala:851)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1020)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1020)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2168)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
	at org.h2.message.DbException.get(DbException.java:179)
	at org.h2.message.DbException.get(DbException.java:155)
	at org.h2.table.Column.validateConvertUpdateSequence(Column.java:345)
	at org.h2.table.Table.validateConvertUpdateSequence(Table.java:793)
	at org.h2.command.dml.Insert.insertRows(Insert.java:151)
	at org.h2.command.dml.Insert.update(Insert.java:114)
	at org.h2.command.CommandContainer.update(CommandContainer.java:101)
	at org.h2.command.Command.executeUpdate(Command.java:260)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:164)
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1215)
	... 14 more
org.h2.jdbc.JdbcSQLException: NULL not allowed for column "NAME"; SQL statement:
INSERT INTO TEST.PEOPLE1 ("NAME","THEID") VALUES (?,?) [23502-195]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
	at org.h2.message.DbException.get(DbException.java:179)
	at org.h2.message.DbException.get(DbException.java:155)
	at org.h2.table.Column.validateConvertUpdateSequence(Column.java:345)
	at org.h2.table.Table.validateConvertUpdateSequence(Table.java:793)
	at org.h2.command.dml.Insert.insertRows(Insert.java:151)
	at org.h2.command.dml.Insert.update(Insert.java:114)
	at org.h2.command.CommandContainer.update(CommandContainer.java:101)
	at org.h2.command.Command.executeUpdate(Command.java:260)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:164)
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1215)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:679)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1(JdbcUtils.scala:853)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$saveTable$1$adapted(JdbcUtils.scala:851)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1020)
	at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1020)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2168)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

14:12:53.567 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 76.0 failed 1 times; aborting job
[info] - SPARK-19726: INSERT null to a NOT NULL column (76 milliseconds)
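The H2 errors above come from appending a row with a null NAME to TEST.PEOPLE1, whose NAME column is NOT NULL. The sketch below shows roughly the kind of JDBC append that triggers that error; the in-memory H2 URL and the assumption that the table already exists with a NOT NULL NAME column are illustrative, not details of the suite.

import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

object InsertNullIntoNotNullColumn {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[1]").appName("jdbc-not-null").getOrCreate()
    import spark.implicits._

    val url = "jdbc:h2:mem:testdb"   // assumed in-memory H2 database
    val props = new Properties()

    // A row with a null NAME; appending it to a table whose NAME column is
    // declared NOT NULL makes the H2 driver raise JdbcBatchUpdateException,
    // which Spark surfaces as the task failure seen in the log above.
    val df = Seq((null.asInstanceOf[String], 3)).toDF("NAME", "THEID")
    df.write.mode(SaveMode.Append).jdbc(url, "TEST.PEOPLE1", props)

    spark.stop()
  }
}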
[info] - SPARK-23856 Spark jdbc setQueryTimeout option !!! IGNORED !!!
[info] - metrics (187 milliseconds)
14:12:53.824 WARN org.apache.spark.sql.jdbc.JDBCWriteSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.jdbc.JDBCWriteSuite, thread names: Generate Seed, shuffle-boss-2450-1, rpc-boss-2447-1 =====

[info] ExplainSuiteAE:
[info] - Explain formatted (365 milliseconds)
14:12:54.283 WARN org.apache.spark.sql.ExplainSuiteAE: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.ExplainSuiteAE, thread names: QueryStageCreator-133, shuffle-boss-2456-1, rpc-boss-2453-1, QueryStageCreator-135, QueryStageCreator-136, QueryStageCreator-134 =====

[info] OuterJoinSuite:
[info] - basic left outer join using ShuffledHashJoin (148 milliseconds)
[info] - basic left outer join using BroadcastHashJoin (whole-stage-codegen off) (91 milliseconds)
[info] - basic left outer join using BroadcastHashJoin (whole-stage-codegen on) (66 milliseconds)
[info] - basic left outer join using SortMergeJoin (131 milliseconds)
[info] - basic left outer join using BroadcastNestedLoopJoin build left (107 milliseconds)
[info] - basic left outer join using BroadcastNestedLoopJoin build right (49 milliseconds)
[info] - basic right outer join using ShuffledHashJoin (52 milliseconds)
[info] - basic right outer join using BroadcastHashJoin (whole-stage-codegen off) (67 milliseconds)
[info] - basic right outer join using BroadcastHashJoin (whole-stage-codegen on) (72 milliseconds)
[info] - basic right outer join using SortMergeJoin (94 milliseconds)
[info] - basic right outer join using BroadcastNestedLoopJoin build left (47 milliseconds)
[info] - basic right outer join using BroadcastNestedLoopJoin build right (84 milliseconds)
[info] - basic full outer join using SortMergeJoin (92 milliseconds)
[info] - basic full outer join using BroadcastNestedLoopJoin build left (83 milliseconds)
[info] - basic full outer join using BroadcastNestedLoopJoin build right (78 milliseconds)
[info] - left outer join with both inputs empty using ShuffledHashJoin (54 milliseconds)
[info] - left outer join with both inputs empty using BroadcastHashJoin (whole-stage-codegen off) (73 milliseconds)
[info] - left outer join with both inputs empty using BroadcastHashJoin (whole-stage-codegen on) (51 milliseconds)
[info] - left outer join with both inputs empty using SortMergeJoin (53 milliseconds)
[info] - left outer join with both inputs empty using BroadcastNestedLoopJoin build left (41 milliseconds)
[info] - left outer join with both inputs empty using BroadcastNestedLoopJoin build right (38 milliseconds)
[info] - right outer join with both inputs empty using ShuffledHashJoin (55 milliseconds)
[info] - right outer join with both inputs empty using BroadcastHashJoin (whole-stage-codegen off) (46 milliseconds)
[info] - right outer join with both inputs empty using BroadcastHashJoin (whole-stage-codegen on) (46 milliseconds)
[info] - right outer join with both inputs empty using SortMergeJoin (42 milliseconds)
[info] - right outer join with both inputs empty using BroadcastNestedLoopJoin build left (31 milliseconds)
[info] - right outer join with both inputs empty using BroadcastNestedLoopJoin build right (45 milliseconds)
[info] - full outer join with both inputs empty using SortMergeJoin (49 milliseconds)
[info] - full outer join with both inputs empty using BroadcastNestedLoopJoin build left (60 milliseconds)
[info] - full outer join with both inputs empty using BroadcastNestedLoopJoin build right (49 milliseconds)
[info] - left outer join with unique keys using ShuffledHashJoin (57 milliseconds)
[info] - left outer join with unique keys using BroadcastHashJoin (whole-stage-codegen off) (57 milliseconds)
[info] - left outer join with unique keys using BroadcastHashJoin (whole-stage-codegen on) (67 milliseconds)
[info] - left outer join with unique keys using SortMergeJoin (107 milliseconds)
[info] - left outer join with unique keys using BroadcastNestedLoopJoin build left (76 milliseconds)
[info] - left outer join with unique keys using BroadcastNestedLoopJoin build right (62 milliseconds)
[info] - right outer join with unique keys using ShuffledHashJoin (83 milliseconds)
[info] - right outer join with unique keys using BroadcastHashJoin (whole-stage-codegen off) (61 milliseconds)
[info] - right outer join with unique keys using BroadcastHashJoin (whole-stage-codegen on) (71 milliseconds)
[info] - right outer join with unique keys using SortMergeJoin (115 milliseconds)
[info] - right outer join with unique keys using BroadcastNestedLoopJoin build left (55 milliseconds)
[info] - right outer join with unique keys using BroadcastNestedLoopJoin build right (79 milliseconds)
14:12:57.277 WARN org.apache.spark.sql.execution.joins.OuterJoinSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.joins.OuterJoinSuite, thread names: shuffle-boss-2462-1, rpc-boss-2459-1 =====

[info] RowQueueSuite:
[info] - in-memory queue (4 milliseconds)
[info] - disk queue (encryption = off) (8 milliseconds)
[info] - disk queue (encryption = on) (109 milliseconds)
[info] - hybrid queue (encryption = off) (7 milliseconds)
[info] - hybrid queue (encryption = on) (13 milliseconds)
[info] PlannerSuite:
[info] - count is partially aggregated (6 milliseconds)
[info] - count distinct is partially aggregated (4 milliseconds)
[info] - mixed aggregates are partially aggregated (3 milliseconds)
[info] - mixed aggregates with same distinct columns (72 milliseconds)
[info] - sizeInBytes estimation of limit operator for broadcast hash join optimization (95 milliseconds)
[info] - InMemoryRelation statistics propagation (143 milliseconds)
[info] - SPARK-11390 explain should print PushedFilters of PhysicalRDD (278 milliseconds)
[info] - efficient terminal limit -> sort should use TakeOrderedAndProject (30 milliseconds)
[info] - terminal limit -> project -> sort should use TakeOrderedAndProject (31 milliseconds)
[info] - terminal limits that are not handled by TakeOrderedAndProject should use CollectLimit (19 milliseconds)
[info] - TakeOrderedAndProject can appear in the middle of plans (31 milliseconds)
[info] - CollectLimit can appear in the middle of a plan when caching is used (25 milliseconds)
[info] - TakeOrderedAndProjectExec appears only when number of limit is below the threshold. (91 milliseconds)
[info] - PartitioningCollection (127 milliseconds)
[info] - collapse adjacent repartitions (18 milliseconds)
[info] - EnsureRequirements with child partitionings with different numbers of output partitions (3 milliseconds)
[info] - EnsureRequirements with compatible child partitionings that do not satisfy distribution (2 milliseconds)
[info] - EnsureRequirements with compatible child partitionings that satisfy distribution (1 millisecond)
[info] - EnsureRequirements should not repartition if only ordering requirement is unsatisfied (1 millisecond)
[info] - EnsureRequirements eliminates Exchange if child has same partitioning (0 milliseconds)
[info] - EnsureRequirements does not eliminate Exchange with different partitioning (1 millisecond)
[info] - EnsureRequirements should respect ClusteredDistribution's num partitioning (0 milliseconds)
[info] - Reuse exchanges (3 milliseconds)
[info] - EnsureRequirements skips sort when either side of join keys is required after inner SMJ (3 milliseconds)
[info] - EnsureRequirements skips sort when key order of a parent SMJ is propagated from its child SMJ (4 milliseconds)
[info] - EnsureRequirements for sort operator after left outer sort merge join (1 millisecond)
[info] - EnsureRequirements for sort operator after right outer sort merge join (2 milliseconds)
[info] - EnsureRequirements adds sort after full outer sort merge join (1 millisecond)
[info] - EnsureRequirements adds sort when there is no existing ordering (1 millisecond)
[info] - EnsureRequirements skips sort when required ordering is prefix of existing ordering (0 milliseconds)
[info] - EnsureRequirements skips sort when required ordering is semantically equal to existing ordering (0 milliseconds)
[info] - EnsureRequirements adds sort when required ordering isn't a prefix of existing ordering (0 milliseconds)
[info] - SPARK-24242: RangeExec should have correct output ordering and partitioning (37 milliseconds)
[info] - SPARK-24495: EnsureRequirements can return wrong plan when reusing the same key in join (1 millisecond)
[info] - SPARK-27485: EnsureRequirements.reorder should handle duplicate expressions (0 milliseconds)
[info] - SPARK-24500: create union with stream of children (31 milliseconds)
[info] - SPARK-25278: physical nodes should be different instances for same logical nodes (31 milliseconds)
[info] - SPARK-24556: always rewrite output partitioning in ReusedExchangeExec and InMemoryTableScanExec (179 milliseconds)
[info] - SPARK-26812: wrong nullability for complex datatypes in union (0 milliseconds)
[info] - Do not analyze subqueries twice (36 milliseconds)
[info] - aliases in the project should not introduce extra shuffle (61 milliseconds)
[info] - aliases to expressions should not be replaced (63 milliseconds)
[info] - aliases in the aggregate expressions should not introduce extra shuffle (58 milliseconds)
[info] - aliases in the object hash/sort aggregate expressions should not introduce extra shuffle (119 milliseconds)
[info] - aliases in the sort aggregate expressions should not introduce extra sort (60 milliseconds)
[info] - Change the number of partitions to zero when a range is empty (whole-stage-codegen off) (14 milliseconds)
[info] - Change the number of partitions to zero when a range is empty (whole-stage-codegen on) (22 milliseconds)
14:12:59.481 WARN org.apache.spark.sql.execution.PlannerSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.PlannerSuite, thread names: rpc-boss-2465-1, shuffle-boss-2468-1 =====

[info] OrcColumnarBatchReaderSuite:
[info] - all partitions are requested: struct<col1:int,col2:int> (1 millisecond)
[info] - initBatch should initialize requested partition columns only: struct<col1:int,col2:int> (1 millisecond)
[info] - all partitions are requested: struct<col1:int,col2:int,p1:string,p2:string> (1 millisecond)
[info] - initBatch should initialize requested partition columns only: struct<col1:int,col2:int,p1:string,p2:string> (0 milliseconds)
[info] - all partitions are requested: struct<col1:int,col2:int,p1:string> (0 milliseconds)
[info] - initBatch should initialize requested partition columns only: struct<col1:int,col2:int,p1:string> (0 milliseconds)
[info] - all partitions are requested: struct<col1:int,col2:int,p2:string> (0 milliseconds)
[info] - initBatch should initialize requested partition columns only: struct<col1:int,col2:int,p2:string> (1 millisecond)
14:12:59.563 WARN org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReaderSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.orc.OrcColumnarBatchReaderSuite, thread names: shuffle-boss-2474-1, rpc-boss-2471-1 =====

[info] Test run started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf1Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf2Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf3Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf4Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf5Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf6Test started
[info] Test test.org.apache.spark.sql.JavaUDFSuite.udf7Test started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 1.256s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testFormatAPI started
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testTextAPI started
14:13:01.059 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testJsonAPI started
14:13:01.320 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testLoadAPI started
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testOptionsAPI started
14:13:01.709 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testSaveModeAPI started
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testCsvAPI started
14:13:01.882 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testParquetAPI started
14:13:02.132 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameReaderWriterSuite.testTextFileAPI started
14:13:02.367 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test run finished: 0 failed, 0 ignored, 9 total, 1.497s
[info] Test run started
[info] Test test.org.apache.spark.sql.Java8DatasetAggregatorSuite.testTypedAggregationCount started
[info] Test test.org.apache.spark.sql.Java8DatasetAggregatorSuite.testTypedAggregationSumDouble started
[info] Test test.org.apache.spark.sql.Java8DatasetAggregatorSuite.testTypedAggregationSumLong started
[info] Test test.org.apache.spark.sql.Java8DatasetAggregatorSuite.testTypedAggregationAverage started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.928s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaUDAFSuite.udf1Test started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.286s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDatasetAggregatorSuite.testTypedAggregationCount started
[info] Test test.org.apache.spark.sql.JavaDatasetAggregatorSuite.testTypedAggregationSumDouble started
[info] Test test.org.apache.spark.sql.JavaDatasetAggregatorSuite.testTypedAggregationSumLong started
[info] Test test.org.apache.spark.sql.JavaDatasetAggregatorSuite.testTypedAggregationAnonClass started
[info] Test test.org.apache.spark.sql.JavaDatasetAggregatorSuite.testTypedAggregationAverage started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 2.136s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.applySchema started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.dataFrameRDDOperations started
[info] Test test.org.apache.spark.sql.JavaApplySchemaSuite.applySchemaToJSON started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.499s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaColumnExpressionSuite.isInCollectionCheckExceptionMessage started
[info] Test test.org.apache.spark.sql.JavaColumnExpressionSuite.isInCollectionWorksCorrectlyOnJava started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.28s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testRuntimeNullabilityCheck started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCircularReferenceBean1 started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCircularReferenceBean2 started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCircularReferenceBean3 started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSerializeNull started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testRandomSplit started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTypedFilterPreservingSchema started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testLocalDateAndInstantEncoders started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJoin started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTake started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testToLocalIterator started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSpecificLists started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testForeach started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testNonNullField started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testPrimitiveEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testEmptyBean started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCommonOperation started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testNullInTopLevelBean started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testGroupBy started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSetOperation started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testBeanWithEnum started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testKryoEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.test started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaBeanEncoder2 started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testCollect started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testKryoEncoderErrorMessageForPrivateClass started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaBeanEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTupleEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testNestedTupleEncoder started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testTupleEncoderSchema started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testReduce started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testSelect started
[info] Test test.org.apache.spark.sql.JavaDatasetSuite.testJavaEncoderErrorMessageForPrivateClass started
[info] Test run finished: 0 failed, 0 ignored, 34 total, 14.38s
[info] Test run started
[info] Test test.org.apache.spark.sql.streaming.JavaDataStreamReaderWriterSuite.testForeachBatchAPI started
14:13:22.000 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-5b3c8b9f-b8f3-4676-9a94-72a141125b7c. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
[info] Test test.org.apache.spark.sql.streaming.JavaDataStreamReaderWriterSuite.testForeachAPI started
14:13:22.131 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-85accad6-2d72-4948-9d2b-855c0a7c1ac9. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.252s
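
For reference, the StreamingQueryManager warnings above mention the spark.sql.streaming.forceDeleteTempCheckpointLocation setting. Below is a minimal, hypothetical PySpark sketch of enabling it; the session builder options and app name are illustrative assumptions, not taken from this build.

    # Illustrative sketch only: enables force-deletion of temporary checkpoint
    # directories, as suggested by the warning above.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("local[2]")  # assumption: any master works for this demo
        .appName("checkpoint-config-sketch")  # assumed app name
        .config("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")
        .getOrCreate()
    )

    # With the flag set, temporary checkpoint locations created for queries
    # without an explicit checkpointLocation are deleted even on failure
    # (best effort, per the warning text above).
    print(spark.conf.get("spark.sql.streaming.forceDeleteTempCheckpointLocation"))
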
[info] Test run started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorForSingleColumnRow started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorForArrayColumn started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorWhenOnlyTheLastColumnDiffers started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorForMixedColumns started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testCompareLongsAsUnsigned started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testCompareLongsAsLittleEndian started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorForNullColumns started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorForMultipleColumnRow started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorWhenSubtractionIsDivisibleByMaxIntValue started
[info] Test test.org.apache.spark.sql.execution.sort.RecordBinaryComparatorSuite.testBinaryComparatorWhenSubtractionCanOverflowLongValue started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.01s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaSaveLoadSuite.saveAndLoadWithSchema started
[info] Test test.org.apache.spark.sql.JavaSaveLoadSuite.saveAndLoad started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.033s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testMapZipWith started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testTransformValues started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testZipWith started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testTransformKeys started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testAggregate started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testMapFilter started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testExists started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testFilter started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testForall started
[info] Test test.org.apache.spark.sql.JavaHigherOrderFunctionsSuite.testTransform started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.925s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaBeanDeserializationSuite.testBeanWithArrayFieldDeserialization started
[info] Test test.org.apache.spark.sql.JavaBeanDeserializationSuite.testSpark22000FailToUpcast started
[info] Test test.org.apache.spark.sql.JavaBeanDeserializationSuite.testSpark22000 started
[info] Test test.org.apache.spark.sql.JavaBeanDeserializationSuite.testBeanWithLocalDateAndInstant started
[info] Test test.org.apache.spark.sql.JavaBeanDeserializationSuite.testBeanWithMapFieldsDeserialization started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.597s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCollectAndTake started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testJsonRDDToDataFrame started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testVarargMethods started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testBeanWithoutGetter started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateStructTypeFromList started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testSampleBy started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCrosstab started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testUDF started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFromFromList started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCircularReferenceBean started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testFrequentItems started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testSampleByColumn started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testExecution started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testTextLoad started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.pivot started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testGenericLoad started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCountMinSketch started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.pivotColumnValues started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFrameFromJavaBeans started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCorrelation started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testBloomFilter started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCovariance started
[info] Test test.org.apache.spark.sql.JavaDataFrameSuite.testCreateDataFrameFromLocalJavaBeans started
[info] Test run finished: 0 failed, 0 ignored, 23 total, 9.663s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testOverwritePartitionsAPI started
14:13:34.524 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:34.559 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testReplaceAPI started
14:13:34.708 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:34.752 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:34.793 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:34.834 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:34.876 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testAppendAPI started
14:13:34.989 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.028 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testCreateOrReplaceAPI started
14:13:35.140 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.179 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.213 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.243 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.274 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testOverwriteAPI started
14:13:35.380 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.427 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test test.org.apache.spark.sql.JavaDataFrameWriterV2Suite.testCreateAPI started
14:13:35.546 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.606 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.658 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.707 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
14:13:35.755 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
  
[info] Test run finished: 0 failed, 0 ignored, 6 total, 1.347s
[info] Test run started
[info] Test test.org.apache.spark.sql.JavaRowSuite.constructSimpleRow started
[info] Test test.org.apache.spark.sql.JavaRowSuite.constructComplexRow started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.002s
[info] ScalaTest
[info] Run completed in 1 hour, 44 minutes, 5 seconds.
[info] Total number of tests run: 8667
[info] Suites: completed 367, aborted 0
[info] Tests: succeeded 8667, failed 0, canceled 1, ignored 52, pending 0
[info] All tests passed.
[info] Passed: Total 8792, Failed 0, Errors 0, Passed 8792, Ignored 52, Canceled 1
[warn] 5.188 seconds of the last 10 seconds were spent in garbage collection. You may want to increase the project heap size using `-Xmx` or try a different gc algorithm, e.g. `-XX:+UseG1GC`, for better performance.
[success] Total time: 6255 s (01:44:15), completed Oct 28, 2020 2:13:44 PM

========================================================================
Running PySpark tests
========================================================================
Running PySpark tests. Output is in /home/jenkins/workspace/SparkPullRequestBuilder/python/unit-tests.log
Will test against the following Python executables: ['python3.6', 'pypy3']
Will test the following Python modules: ['pyspark-sql', 'pyspark-mllib', 'pyspark-ml']
python3.6 python_implementation is CPython
python3.6 version is: Python 3.6.8 :: Anaconda, Inc.
pypy3 python_implementation is PyPy
pypy3 version is: Python 3.6.9 (5da45ced70e515f94686be0df47c59abd1348ebc, Oct 17 2019, 22:59:56)
[PyPy 7.2.0 with GCC 8.2.0]
Starting test(pypy3): pyspark.sql.tests.test_arrow
Starting test(pypy3): pyspark.sql.tests.test_column
Starting test(pypy3): pyspark.sql.tests.test_dataframe
Starting test(pypy3): pyspark.sql.tests.test_catalog
Starting test(pypy3): pyspark.sql.tests.test_conf
Starting test(pypy3): pyspark.sql.tests.test_context
Starting test(pypy3): pyspark.sql.tests.test_datasources
Starting test(pypy3): pyspark.sql.tests.test_functions
Finished test(pypy3): pyspark.sql.tests.test_arrow (1s) ... 61 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_group
Finished test(pypy3): pyspark.sql.tests.test_conf (11s)
Starting test(pypy3): pyspark.sql.tests.test_pandas_cogrouped_map
Finished test(pypy3): pyspark.sql.tests.test_pandas_cogrouped_map (1s) ... 15 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_grouped_map
Finished test(pypy3): pyspark.sql.tests.test_pandas_grouped_map (1s) ... 21 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_map
Finished test(pypy3): pyspark.sql.tests.test_pandas_map (1s) ... 6 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_udf
Finished test(pypy3): pyspark.sql.tests.test_pandas_udf (1s) ... 6 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_udf_grouped_agg
Finished test(pypy3): pyspark.sql.tests.test_pandas_udf_grouped_agg (1s) ... 16 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_udf_scalar
Finished test(pypy3): pyspark.sql.tests.test_pandas_udf_scalar (1s) ... 50 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_udf_typehints
Finished test(pypy3): pyspark.sql.tests.test_pandas_udf_typehints (1s) ... 10 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_pandas_udf_window
Finished test(pypy3): pyspark.sql.tests.test_catalog (22s)
Starting test(pypy3): pyspark.sql.tests.test_readwriter
Finished test(pypy3): pyspark.sql.tests.test_pandas_udf_window (1s) ... 14 tests were skipped
Starting test(pypy3): pyspark.sql.tests.test_serde
Finished test(pypy3): pyspark.sql.tests.test_group (21s)
Starting test(pypy3): pyspark.sql.tests.test_session
Finished test(pypy3): pyspark.sql.tests.test_column (25s)
Starting test(pypy3): pyspark.sql.tests.test_streaming
Finished test(pypy3): pyspark.sql.tests.test_datasources (25s)
Starting test(pypy3): pyspark.sql.tests.test_types
Finished test(pypy3): pyspark.sql.tests.test_context (25s)
Starting test(pypy3): pyspark.sql.tests.test_udf
Finished test(pypy3): pyspark.sql.tests.test_functions (46s)
Starting test(pypy3): pyspark.sql.tests.test_utils
Finished test(pypy3): pyspark.sql.tests.test_serde (27s)
Starting test(python3.6): pyspark.ml.tests.test_algorithms
Finished test(pypy3): pyspark.sql.tests.test_dataframe (49s) ... 10 tests were skipped
Starting test(python3.6): pyspark.ml.tests.test_base
Finished test(pypy3): pyspark.sql.tests.test_readwriter (34s)
Starting test(python3.6): pyspark.ml.tests.test_evaluation
Finished test(pypy3): pyspark.sql.tests.test_utils (12s)
Starting test(python3.6): pyspark.ml.tests.test_feature
Finished test(pypy3): pyspark.sql.tests.test_session (37s)
Starting test(python3.6): pyspark.ml.tests.test_image
Finished test(pypy3): pyspark.sql.tests.test_streaming (37s)
Starting test(python3.6): pyspark.ml.tests.test_linalg
Finished test(python3.6): pyspark.ml.tests.test_base (15s)
Starting test(python3.6): pyspark.ml.tests.test_param
Finished test(python3.6): pyspark.ml.tests.test_evaluation (19s)
Starting test(python3.6): pyspark.ml.tests.test_persistence
Finished test(python3.6): pyspark.ml.tests.test_image (15s)
Starting test(python3.6): pyspark.ml.tests.test_pipeline
Finished test(pypy3): pyspark.sql.tests.test_types (54s)
Starting test(python3.6): pyspark.ml.tests.test_stat
Finished test(python3.6): pyspark.ml.tests.test_pipeline (6s)
Starting test(python3.6): pyspark.ml.tests.test_training_summary
Finished test(pypy3): pyspark.sql.tests.test_udf (60s)
Starting test(python3.6): pyspark.ml.tests.test_tuning
Finished test(python3.6): pyspark.ml.tests.test_param (22s)
Starting test(python3.6): pyspark.ml.tests.test_wrapper
Finished test(python3.6): pyspark.ml.tests.test_feature (32s)
Starting test(python3.6): pyspark.mllib.tests.test_algorithms
Finished test(python3.6): pyspark.ml.tests.test_stat (15s)
Starting test(python3.6): pyspark.mllib.tests.test_feature
Finished test(python3.6): pyspark.ml.tests.test_linalg (34s)
Starting test(python3.6): pyspark.mllib.tests.test_linalg
Finished test(python3.6): pyspark.ml.tests.test_wrapper (18s)
Starting test(python3.6): pyspark.mllib.tests.test_stat
Finished test(python3.6): pyspark.ml.tests.test_training_summary (42s)
Starting test(python3.6): pyspark.mllib.tests.test_streaming_algorithms
Finished test(python3.6): pyspark.mllib.tests.test_feature (34s)
Starting test(python3.6): pyspark.mllib.tests.test_util
Finished test(python3.6): pyspark.mllib.tests.test_stat (28s)
Starting test(python3.6): pyspark.sql.tests.test_arrow
Finished test(python3.6): pyspark.sql.tests.test_arrow (0s) ... 61 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_catalog
Finished test(python3.6): pyspark.ml.tests.test_persistence (61s)
Starting test(python3.6): pyspark.sql.tests.test_column
Finished test(python3.6): pyspark.mllib.tests.test_util (12s)
Starting test(python3.6): pyspark.sql.tests.test_conf
Finished test(python3.6): pyspark.ml.tests.test_algorithms (92s)
Starting test(python3.6): pyspark.sql.tests.test_context
Finished test(python3.6): pyspark.sql.tests.test_conf (10s)
Starting test(python3.6): pyspark.sql.tests.test_dataframe
Finished test(python3.6): pyspark.sql.tests.test_catalog (17s)
Starting test(python3.6): pyspark.sql.tests.test_datasources
Finished test(python3.6): pyspark.sql.tests.test_column (18s)
Starting test(python3.6): pyspark.sql.tests.test_functions
Finished test(python3.6): pyspark.sql.tests.test_context (20s)
Starting test(python3.6): pyspark.sql.tests.test_group
Finished test(python3.6): pyspark.mllib.tests.test_algorithms (71s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_cogrouped_map
Finished test(python3.6): pyspark.mllib.tests.test_linalg (66s)
Starting test(python3.6): pyspark.sql.tests.test_pandas_grouped_map
Finished test(python3.6): pyspark.sql.tests.test_pandas_cogrouped_map (0s) ... 15 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_map
Finished test(python3.6): pyspark.sql.tests.test_pandas_grouped_map (0s) ... 21 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf
Finished test(python3.6): pyspark.sql.tests.test_pandas_map (0s) ... 6 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_agg
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf (0s) ... 6 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_scalar
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_grouped_agg (0s) ... 16 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_typehints
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_scalar (0s) ... 50 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_pandas_udf_window
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_typehints (0s) ... 10 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_readwriter
Finished test(python3.6): pyspark.sql.tests.test_pandas_udf_window (0s) ... 14 tests were skipped
Starting test(python3.6): pyspark.sql.tests.test_serde
Finished test(python3.6): pyspark.sql.tests.test_datasources (22s)
Starting test(python3.6): pyspark.sql.tests.test_session
Finished test(python3.6): pyspark.sql.tests.test_group (18s)
Starting test(python3.6): pyspark.sql.tests.test_streaming
Finished test(python3.6): pyspark.sql.tests.test_serde (24s)
Starting test(python3.6): pyspark.sql.tests.test_types
Finished test(python3.6): pyspark.sql.tests.test_functions (39s)
Starting test(python3.6): pyspark.sql.tests.test_udf
Finished test(python3.6): pyspark.sql.tests.test_readwriter (31s)
Starting test(python3.6): pyspark.sql.tests.test_utils
Finished test(python3.6): pyspark.sql.tests.test_dataframe (48s) ... 3 tests were skipped
Starting test(pypy3): pyspark.sql.avro.functions
Finished test(python3.6): pyspark.sql.tests.test_utils (11s)
Starting test(pypy3): pyspark.sql.catalog
Finished test(python3.6): pyspark.sql.tests.test_session (35s)
Starting test(pypy3): pyspark.sql.column
Finished test(python3.6): pyspark.sql.tests.test_streaming (36s)
Starting test(pypy3): pyspark.sql.conf
Finished test(pypy3): pyspark.sql.avro.functions (18s)
Starting test(pypy3): pyspark.sql.context
Finished test(pypy3): pyspark.sql.conf (8s)
Starting test(pypy3): pyspark.sql.dataframe
Finished test(pypy3): pyspark.sql.catalog (17s)
Starting test(pypy3): pyspark.sql.functions
Finished test(python3.6): pyspark.sql.tests.test_types (48s)
Starting test(pypy3): pyspark.sql.group
Finished test(pypy3): pyspark.sql.context (24s)
Starting test(pypy3): pyspark.sql.pandas.conversion
Finished test(pypy3): pyspark.sql.column (33s)
Starting test(pypy3): pyspark.sql.pandas.group_ops
Finished test(python3.6): pyspark.sql.tests.test_udf (55s)
Starting test(pypy3): pyspark.sql.pandas.map_ops
Finished test(pypy3): pyspark.sql.pandas.conversion (8s)
Starting test(pypy3): pyspark.sql.pandas.serializers
Finished test(pypy3): pyspark.sql.pandas.serializers (0s)
Starting test(pypy3): pyspark.sql.pandas.typehints
Finished test(pypy3): pyspark.sql.pandas.typehints (0s)
Starting test(pypy3): pyspark.sql.pandas.types
Finished test(pypy3): pyspark.sql.pandas.types (0s)
Starting test(pypy3): pyspark.sql.pandas.utils
Finished test(pypy3): pyspark.sql.pandas.utils (1s)
Starting test(pypy3): pyspark.sql.readwriter
Finished test(pypy3): pyspark.sql.pandas.group_ops (12s)
Starting test(pypy3): pyspark.sql.session
Finished test(pypy3): pyspark.sql.pandas.map_ops (12s)
Starting test(pypy3): pyspark.sql.streaming
Finished test(pypy3): pyspark.sql.group (29s)
Starting test(pypy3): pyspark.sql.types
Finished test(pypy3): pyspark.sql.types (9s)
Starting test(pypy3): pyspark.sql.udf
Finished test(pypy3): pyspark.sql.session (24s)
Starting test(pypy3): pyspark.sql.window
Finished test(pypy3): pyspark.sql.readwriter (26s)
Starting test(python3.6): pyspark.ml.classification
Finished test(pypy3): pyspark.sql.streaming (17s)
Starting test(python3.6): pyspark.ml.clustering
Finished test(python3.6): pyspark.mllib.tests.test_streaming_algorithms (156s)
Starting test(python3.6): pyspark.ml.evaluation
Finished test(pypy3): pyspark.sql.dataframe (64s)
Starting test(python3.6): pyspark.ml.feature
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

[Stage 0:>                                                          (0 + 4) / 4]
[Stage 5:>                                                          (0 + 4) / 4]
[Stage 47:====================================>                 (137 + 6) / 200]
[Stage 50:==============================================>       (173 + 4) / 200]

**********************************************************************
File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/functions.py", line 2939, in pyspark.sql.functions.schema_of_json
Failed example:
    df.select(schema_of_json(lit('{"a": 0}')).alias("json")).collect()
Expected:
    [Row(json='struct<a:bigint>')]
Got:
    [Row(json='STRUCT<`a`: BIGINT>')]
**********************************************************************
File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/functions.py", line 2942, in pyspark.sql.functions.schema_of_json
Failed example:
    df.select(schema.alias("json")).collect()
Expected:
    [Row(json='struct<a:bigint>')]
Got:
    [Row(json='STRUCT<`a`: BIGINT>')]
**********************************************************************
   2 of   4 in pyspark.sql.functions.schema_of_json
***Test Failed*** 2 failures.

Had test failures in pyspark.sql.functions with pypy3; see logs.
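
The two doctest failures above come from the schema_of_json examples in python/pyspark/sql/functions.py: the build under test prints the schema in the new DDL-style form ('STRUCT<`a`: BIGINT>') while the doctest still expects 'struct<a:bigint>'. A minimal reproduction sketch is below; the local-mode session and single-row DataFrame are assumptions for illustration, and only the function calls come from the failing example.

    # Minimal sketch reproducing the first failing doctest call.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import schema_of_json, lit

    spark = (
        SparkSession.builder
        .master("local[1]")  # assumption: local mode for illustration
        .appName("schema_of_json-repro")
        .getOrCreate()
    )
    df = spark.range(1)

    # The doctest expected [Row(json='struct<a:bigint>')]; on this build the
    # same call returns [Row(json='STRUCT<`a`: BIGINT>')], hence the failures.
    print(df.select(schema_of_json(lit('{"a": 0}')).alias("json")).collect())
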
[error] running /home/jenkins/workspace/SparkPullRequestBuilder/python/run-tests --modules=pyspark-sql,pyspark-mllib,pyspark-ml --parallelism=8 ; received return code 255
Attempting to post to Github...
 > Post successful.
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/130377/
Test FAILed.
Finished: FAILURE