Success
Console Output

Skipping 11,690 KB..
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply$mcII$sp(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:48)
	... 15 more
18:19:16.775 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14655.0 (TID 35362, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (int) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:50)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply$mcII$sp(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:48)
	... 15 more

18:19:16.775 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 14655.0 failed 1 times; aborting job
18:19:16.778 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14655.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14655.0 (TID 35362, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (int) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:50)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply$mcII$sp(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:48)
	... 15 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1455)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1443)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1442)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1670)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1614)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
	at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:520)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$apply$2.apply$mcV$sp(CommitFailureTestRelationSuite.scala:56)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$apply$2.apply(CommitFailureTestRelationSuite.scala:56)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$apply$2.apply(CommitFailureTestRelationSuite.scala:56)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2.apply(CommitFailureTestRelationSuite.scala:55)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2.apply(CommitFailureTestRelationSuite.scala:49)
	at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:124)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite.withTempPath(CommitFailureTestRelationSuite.scala:28)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2.apply$mcV$sp(CommitFailureTestRelationSuite.scala:49)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2.apply(CommitFailureTestRelationSuite.scala:47)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2.apply(CommitFailureTestRelationSuite.scala:47)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:68)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:31)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:31)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	... 3 more
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (int) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:50)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply$mcII$sp(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$2$$anonfun$apply$mcV$sp$2$$anonfun$3.apply(CommitFailureTestRelationSuite.scala:51)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:48)
	... 15 more
[info] - call failure callbacks before close writer - default (92 milliseconds)
18:19:16.881 ERROR org.apache.spark.util.Utils: Aborting task
java.lang.RuntimeException: Intentional task writer failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.write(CommitFailureTestSource.scala:54)
	at org.apache.spark.sql.execution.datasources.OutputWriter.writeInternal(OutputWriter.scala:93)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:397)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
18:19:16.882 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Job job_20180809181916_14656 aborted.
18:19:16.882 WARN org.apache.spark.util.Utils: Suppressing exception in catch: Intentional task commitment failure for testing purpose.
java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.releaseResources(FileFormatWriter.scala:408)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
18:19:16.890 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 14656.0 (TID 35363)
org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional task writer failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.write(CommitFailureTestSource.scala:54)
	at org.apache.spark.sql.execution.datasources.OutputWriter.writeInternal(OutputWriter.scala:93)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:397)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.releaseResources(FileFormatWriter.scala:408)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
18:19:16.891 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14656.0 (TID 35363, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional task writer failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.write(CommitFailureTestSource.scala:54)
	at org.apache.spark.sql.execution.datasources.OutputWriter.writeInternal(OutputWriter.scala:93)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:397)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.releaseResources(FileFormatWriter.scala:408)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more

18:19:16.891 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 14656.0 failed 1 times; aborting job
18:19:16.893 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14656.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14656.0 (TID 35363, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Intentional task writer failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.write(CommitFailureTestSource.scala:54)
	at org.apache.spark.sql.execution.datasources.OutputWriter.writeInternal(OutputWriter.scala:93)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:397)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.releaseResources(FileFormatWriter.scala:408)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1455)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1443)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1442)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1670)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1614)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
	at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:520)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4$$anonfun$apply$mcV$sp$3$$anonfun$apply$3.apply$mcV$sp(CommitFailureTestRelationSuite.scala:74)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4$$anonfun$apply$mcV$sp$3$$anonfun$apply$3.apply(CommitFailureTestRelationSuite.scala:74)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4$$anonfun$apply$mcV$sp$3$$anonfun$apply$3.apply(CommitFailureTestRelationSuite.scala:74)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4$$anonfun$apply$mcV$sp$3.apply(CommitFailureTestRelationSuite.scala:73)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4$$anonfun$apply$mcV$sp$3.apply(CommitFailureTestRelationSuite.scala:67)
	at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:124)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite.withTempPath(CommitFailureTestRelationSuite.scala:28)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4.apply$mcV$sp(CommitFailureTestRelationSuite.scala:67)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4.apply(CommitFailureTestRelationSuite.scala:65)
	at org.apache.spark.sql.sources.CommitFailureTestRelationSuite$$anonfun$4.apply(CommitFailureTestRelationSuite.scala:65)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:68)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:31)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:31)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	... 3 more
Caused by: java.lang.RuntimeException: Intentional task writer failure for testing purpose.
	at scala.sys.package$.error(package.scala:27)
	at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.write(CommitFailureTestSource.scala:54)
	at org.apache.spark.sql.execution.datasources.OutputWriter.writeInternal(OutputWriter.scala:93)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:397)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
	... 8 more
	Suppressed: java.lang.RuntimeException: Intentional task commitment failure for testing purpose.
		at scala.sys.package$.error(package.scala:27)
		at org.apache.spark.sql.sources.CommitFailureTestSource$$anon$1$$anon$2.close(CommitFailureTestSource.scala:62)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.releaseResources(FileFormatWriter.scala:408)
		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
		... 9 more
[info] - call failure callbacks before close writer - partitioned (115 milliseconds)
[info] HiveUDFSuite:
18:19:17.006 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/warehouse-3f0f4742-5189-4560-a6b5-6ae95c964952/src specified for non-external table:src
[info] - spark sql udf test that returns a struct (252 milliseconds)
[info] - SPARK-4785 When called with arguments referring column fields, PMOD throws NPE (113 milliseconds)
18:19:17.318 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/warehouse-3f0f4742-5189-4560-a6b5-6ae95c964952/hiveudftesttable specified for non-external table:hiveudftesttable
[info] - hive struct udf (68 milliseconds)
[info] - Max/Min on named_struct (619 milliseconds)
[info] - SPARK-6409 UDAF Average test (252 milliseconds)
18:19:18.256 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/warehouse-3f0f4742-5189-4560-a6b5-6ae95c964952/src specified for non-external table:src
[info] - SPARK-2693 udaf aggregates test (676 milliseconds)
[info] - SPARK-16228 Percentile needs explicit cast to double (13 milliseconds)
[info] - Generic UDAF aggregates (2 seconds, 815 milliseconds)
[info] - UDFIntegerToString (216 milliseconds)
[info] - UDFToListString (56 milliseconds)
[info] - UDFToListInt (68 milliseconds)
[info] - UDFToStringIntMap (100 milliseconds)
[info] - UDFToIntIntMap (48 milliseconds)
[info] - UDFListListInt (140 milliseconds)
[info] - UDFListString (108 milliseconds)
[info] - UDFStringString (156 milliseconds)
[info] - UDFTwoListList (175 milliseconds)
[info] - non-deterministic children of UDF (16 milliseconds)
[info] - non-deterministic children expressions of UDAF (18 milliseconds)
[info] - Hive UDFs with insufficient number of input arguments should trigger an analysis error (39 milliseconds)
[info] - Hive UDF in group by (207 milliseconds)
18:19:23.122 WARN org.apache.spark.sql.hive.HiveExternalCatalog: The table schema given by Hive metastore(struct<page_id:string,impressions:string>) is different from the schema when this table was created by Spark SQL(struct<page_id:int,impressions:int>). We have to fall back to the table schema from Hive metastore which is not case preserving.
18:19:23.200 WARN org.apache.spark.sql.hive.HiveExternalCatalog: The table schema given by Hive metastore(struct<page_id:string,impressions:string>) is different from the schema when this table was created by Spark SQL(struct<page_id:int,impressions:int>). We have to fall back to the table schema from Hive metastore which is not case preserving.
18:19:23.781 WARN org.apache.spark.sql.hive.HiveExternalCatalog: The table schema given by Hive metastore(struct<page_id:string,impressions:string>) is different from the schema when this table was created by Spark SQL(struct<page_id:int,impressions:int>). We have to fall back to the table schema from Hive metastore which is not case preserving.
18:19:23.785 WARN org.apache.spark.sql.hive.HiveExternalCatalog: The table schema given by Hive metastore(struct<page_id:string,impressions:string>) is different from the schema when this table was created by Spark SQL(struct<page_id:int,impressions:int>). We have to fall back to the table schema from Hive metastore which is not case preserving.
18:19:24.356 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/warehouse-3f0f4742-5189-4560-a6b5-6ae95c964952/parquet_tmp specified for non-external table:parquet_tmp
[info] - SPARK-11522 select input_file_name from non-parquet table (1 second, 532 milliseconds)
[info] - Hive Stateful UDF (423 milliseconds)
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.48s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableAndQueryIt started
18:19:26.701 WARN org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex: The directory file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/datasource-5d8e09a8-2598-45aa-b6da-41235ccc2c34 was not found. Was it deleted very recently?
18:19:26.762 WARN org.apache.spark.sql.hive.HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
18:19:26.932 WARN org.apache.spark.sql.hive.HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`externaltable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
18:19:27.195 WARN org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex: The directory file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/warehouse-3f0f4742-5189-4560-a6b5-6ae95c964952/javasavedtable was not found. Was it deleted very recently?
18:19:27.249 WARN org.apache.spark.sql.hive.HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableWithSchemaAndQueryIt started
18:19:27.371 WARN org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex: The directory file:/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/target/tmp/datasource-66a04ea9-1ec4-4117-9fae-a4a8e52b58db was not found. Was it deleted very recently?
18:19:27.427 WARN org.apache.spark.sql.hive.HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
18:19:27.573 WARN org.apache.spark.sql.hive.HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`externaltable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 3 total, 1.178s
[info] ScalaCheck
[info] Passed: Total 0, Failed 0, Errors 0, Passed 0
[info] Warning: Unknown ScalaCheck args provided: -oDF
[info] ScalaTest
[info] Run completed in 1 hour, 49 minutes, 17 seconds.
[info] Total number of tests run: 2311
[info] Suites: completed 72, aborted 0
[info] Tests: succeeded 2311, failed 0, canceled 0, ignored 593, pending 0
[info] All tests passed.
[info] Passed: Total 2316, Failed 0, Errors 0, Passed 2316, Ignored 593
[success] Total time: 7037 s, completed Aug 9, 2018 6:19:34 PM

========================================================================
Running PySpark tests
========================================================================
Running PySpark tests. Output is in /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/python/unit-tests.log
Will test against the following Python executables: ['python2.6', 'python3.4', 'pypy']
Will test the following Python modules: ['pyspark-core', 'pyspark-ml', 'pyspark-mllib', 'pyspark-sql', 'pyspark-streaming']
Starting test(pypy): pyspark.sql.tests
Starting test(pypy): pyspark.streaming.tests
Starting test(python2.6): pyspark.mllib.tests
Starting test(pypy): pyspark.tests
Finished test(pypy): pyspark.tests (175s)
Starting test(python2.6): pyspark.sql.tests
Finished test(python2.6): pyspark.mllib.tests (212s)
Starting test(python2.6): pyspark.streaming.tests
Finished test(pypy): pyspark.streaming.tests (274s)
Starting test(python2.6): pyspark.tests
Finished test(pypy): pyspark.sql.tests (280s)
Starting test(python3.4): pyspark.mllib.tests
Finished test(python2.6): pyspark.tests (152s)
Starting test(python3.4): pyspark.sql.tests
Finished test(python2.6): pyspark.sql.tests (290s)
Starting test(python3.4): pyspark.streaming.tests
Finished test(python2.6): pyspark.streaming.tests (275s)
Starting test(python3.4): pyspark.tests
Finished test(python3.4): pyspark.mllib.tests (233s)
Starting test(pypy): pyspark.accumulators
Finished test(pypy): pyspark.accumulators (5s)
Starting test(pypy): pyspark.broadcast
Finished test(pypy): pyspark.broadcast (5s)
Starting test(pypy): pyspark.conf
Finished test(pypy): pyspark.conf (3s)
Starting test(pypy): pyspark.context
Finished test(pypy): pyspark.context (17s)
Starting test(pypy): pyspark.profiler
Finished test(pypy): pyspark.profiler (7s)
Starting test(pypy): pyspark.rdd
Finished test(pypy): pyspark.rdd (32s)
Starting test(pypy): pyspark.serializers
Finished test(pypy): pyspark.serializers (9s)
Starting test(pypy): pyspark.shuffle
Finished test(pypy): pyspark.shuffle (2s)
Starting test(pypy): pyspark.sql.catalog
Finished test(pypy): pyspark.sql.catalog (11s)
Starting test(pypy): pyspark.sql.column
Finished test(pypy): pyspark.sql.column (12s)
Starting test(pypy): pyspark.sql.conf
Finished test(pypy): pyspark.sql.conf (5s)
Starting test(pypy): pyspark.sql.context
Finished test(pypy): pyspark.sql.context (15s)
Starting test(pypy): pyspark.sql.dataframe
Finished test(python3.4): pyspark.tests (163s)
Starting test(pypy): pyspark.sql.functions
Finished test(pypy): pyspark.sql.dataframe (32s)
Starting test(pypy): pyspark.sql.group
Finished test(pypy): pyspark.sql.functions (30s)
Starting test(pypy): pyspark.sql.readwriter
Finished test(pypy): pyspark.sql.group (15s)
Starting test(pypy): pyspark.sql.session
Finished test(pypy): pyspark.sql.readwriter (24s)
Starting test(pypy): pyspark.sql.streaming
Finished test(pypy): pyspark.sql.session (14s)
Starting test(pypy): pyspark.sql.types
Finished test(python3.4): pyspark.sql.tests (285s)
Starting test(pypy): pyspark.sql.window
Finished test(pypy): pyspark.sql.types (4s)
Starting test(pypy): pyspark.streaming.util
Finished test(pypy): pyspark.streaming.util (1s)
Starting test(python2.6): pyspark.accumulators
Finished test(pypy): pyspark.sql.streaming (9s)
Starting test(python2.6): pyspark.broadcast
Finished test(pypy): pyspark.sql.window (4s)
Starting test(python2.6): pyspark.conf
Finished test(python2.6): pyspark.accumulators (4s)
Starting test(python2.6): pyspark.context
Finished test(python2.6): pyspark.conf (3s)
Starting test(python2.6): pyspark.ml.classification
Finished test(python2.6): pyspark.broadcast (4s)
Starting test(python2.6): pyspark.ml.clustering
Finished test(python2.6): pyspark.context (15s)
Starting test(python2.6): pyspark.ml.evaluation
Finished test(python3.4): pyspark.streaming.tests (270s)
Starting test(python2.6): pyspark.ml.feature
Finished test(python2.6): pyspark.ml.clustering (24s)
Starting test(python2.6): pyspark.ml.linalg.__init__
Finished test(python2.6): pyspark.ml.linalg.__init__ (0s)
Starting test(python2.6): pyspark.ml.recommendation
Finished test(python2.6): pyspark.ml.evaluation (11s)
Starting test(python2.6): pyspark.ml.regression
Finished test(python2.6): pyspark.ml.classification (33s)
Starting test(python2.6): pyspark.ml.tests
Finished test(python2.6): pyspark.ml.recommendation (17s)
Starting test(python2.6): pyspark.ml.tuning
Finished test(python2.6): pyspark.ml.regression (26s)
Starting test(python2.6): pyspark.mllib.classification
Finished test(python2.6): pyspark.ml.feature (36s)
Starting test(python2.6): pyspark.mllib.clustering
Finished test(python2.6): pyspark.ml.tuning (16s)
Starting test(python2.6): pyspark.mllib.evaluation
Finished test(python2.6): pyspark.mllib.evaluation (12s)
Starting test(python2.6): pyspark.mllib.feature
Finished test(python2.6): pyspark.mllib.classification (22s)
Starting test(python2.6): pyspark.mllib.fpm
Finished test(python2.6): pyspark.mllib.clustering (33s)
Starting test(python2.6): pyspark.mllib.linalg.__init__
Finished test(python2.6): pyspark.mllib.linalg.__init__ (0s)
Starting test(python2.6): pyspark.mllib.linalg.distributed
Finished test(python2.6): pyspark.mllib.feature (14s)
Starting test(python2.6): pyspark.mllib.random
Finished test(python2.6): pyspark.mllib.fpm (12s)
Starting test(python2.6): pyspark.mllib.recommendation
Finished test(python2.6): pyspark.mllib.random (6s)
Starting test(python2.6): pyspark.mllib.regression
Finished test(python2.6): pyspark.mllib.recommendation (20s)
Starting test(python2.6): pyspark.mllib.stat.KernelDensity
Finished test(python2.6): pyspark.mllib.stat.KernelDensity (0s)
Starting test(python2.6): pyspark.mllib.stat._statistics
Finished test(python2.6): pyspark.ml.tests (78s)
Starting test(python2.6): pyspark.mllib.tree
Finished test(python2.6): pyspark.mllib.regression (19s)
Starting test(python2.6): pyspark.mllib.util
Finished test(python2.6): pyspark.mllib.linalg.distributed (31s)
Starting test(python2.6): pyspark.profiler
Finished test(python2.6): pyspark.mllib.stat._statistics (10s)
Starting test(python2.6): pyspark.rdd
Finished test(python2.6): pyspark.profiler (5s)
Starting test(python2.6): pyspark.serializers
Finished test(python2.6): pyspark.mllib.util (12s)
Starting test(python2.6): pyspark.shuffle
Finished test(python2.6): pyspark.shuffle (0s)
Starting test(python2.6): pyspark.sql.catalog
Finished test(python2.6): pyspark.mllib.tree (13s)
Starting test(python2.6): pyspark.sql.column
Finished test(python2.6): pyspark.serializers (7s)
Starting test(python2.6): pyspark.sql.conf
Finished test(python2.6): pyspark.sql.conf (3s)
Starting test(python2.6): pyspark.sql.context
Finished test(python2.6): pyspark.sql.catalog (9s)
Starting test(python2.6): pyspark.sql.dataframe
Finished test(python2.6): pyspark.sql.column (12s)
Starting test(python2.6): pyspark.sql.functions
Finished test(python2.6): pyspark.rdd (27s)
Starting test(python2.6): pyspark.sql.group
Finished test(python2.6): pyspark.sql.context (13s)
Starting test(python2.6): pyspark.sql.readwriter
Finished test(python2.6): pyspark.sql.group (17s)
Starting test(python2.6): pyspark.sql.session
Finished test(python2.6): pyspark.sql.functions (27s)
Starting test(python2.6): pyspark.sql.streaming
Finished test(python2.6): pyspark.sql.dataframe (31s)
Starting test(python2.6): pyspark.sql.types
Finished test(python2.6): pyspark.sql.readwriter (20s)
Starting test(python2.6): pyspark.sql.window
Finished test(python2.6): pyspark.sql.types (3s)
Starting test(python2.6): pyspark.streaming.util
Finished test(python2.6): pyspark.streaming.util (0s)
Starting test(python3.4): pyspark.accumulators
Finished test(python2.6): pyspark.sql.window (3s)
Starting test(python3.4): pyspark.broadcast
Finished test(python2.6): pyspark.sql.streaming (8s)
Starting test(python3.4): pyspark.conf
Finished test(python3.4): pyspark.accumulators (5s)
Starting test(python3.4): pyspark.context
Finished test(python3.4): pyspark.broadcast (5s)
Starting test(python3.4): pyspark.ml.classification
Finished test(python2.6): pyspark.sql.session (13s)
Starting test(python3.4): pyspark.ml.clustering
Finished test(python3.4): pyspark.conf (3s)
Starting test(python3.4): pyspark.ml.evaluation
Finished test(python3.4): pyspark.ml.evaluation (12s)
Starting test(python3.4): pyspark.ml.feature
Finished test(python3.4): pyspark.context (17s)
Starting test(python3.4): pyspark.ml.linalg.__init__
Finished test(python3.4): pyspark.ml.linalg.__init__ (0s)
Starting test(python3.4): pyspark.ml.recommendation
Finished test(python3.4): pyspark.ml.clustering (26s)
Starting test(python3.4): pyspark.ml.regression
Finished test(python3.4): pyspark.ml.classification (35s)
Starting test(python3.4): pyspark.ml.tests
Finished test(python3.4): pyspark.ml.recommendation (19s)
Starting test(python3.4): pyspark.ml.tuning
Finished test(python3.4): pyspark.ml.regression (27s)
Starting test(python3.4): pyspark.mllib.classification
Finished test(python3.4): pyspark.ml.feature (39s)
Starting test(python3.4): pyspark.mllib.clustering
Finished test(python3.4): pyspark.ml.tuning (18s)
Starting test(python3.4): pyspark.mllib.evaluation
Finished test(python3.4): pyspark.mllib.evaluation (13s)
Starting test(python3.4): pyspark.mllib.feature
Finished test(python3.4): pyspark.mllib.classification (23s)
Starting test(python3.4): pyspark.mllib.fpm
Finished test(python3.4): pyspark.mllib.feature (17s)
Starting test(python3.4): pyspark.mllib.linalg.__init__
Finished test(python3.4): pyspark.mllib.linalg.__init__ (0s)
Starting test(python3.4): pyspark.mllib.linalg.distributed
Finished test(python3.4): pyspark.mllib.clustering (35s)
Starting test(python3.4): pyspark.mllib.random
Finished test(python3.4): pyspark.mllib.fpm (14s)
Starting test(python3.4): pyspark.mllib.recommendation
Finished test(python3.4): pyspark.mllib.random (8s)
Starting test(python3.4): pyspark.mllib.regression
Finished test(python3.4): pyspark.mllib.recommendation (24s)
Starting test(python3.4): pyspark.mllib.stat.KernelDensity
Finished test(python3.4): pyspark.mllib.stat.KernelDensity (0s)
Starting test(python3.4): pyspark.mllib.stat._statistics
Finished test(python3.4): pyspark.mllib.linalg.distributed (32s)
Starting test(python3.4): pyspark.mllib.tree
Finished test(python3.4): pyspark.mllib.regression (22s)
Starting test(python3.4): pyspark.mllib.util
Finished test(python3.4): pyspark.ml.tests (87s)
Starting test(python3.4): pyspark.profiler
Finished test(python3.4): pyspark.profiler (7s)
Starting test(python3.4): pyspark.rdd
Finished test(python3.4): pyspark.mllib.stat._statistics (12s)
Starting test(python3.4): pyspark.serializers
Finished test(python3.4): pyspark.mllib.util (16s)
Starting test(python3.4): pyspark.shuffle
Finished test(python3.4): pyspark.mllib.tree (18s)
Starting test(python3.4): pyspark.sql.catalog
Finished test(python3.4): pyspark.shuffle (0s)
Starting test(python3.4): pyspark.sql.column
Finished test(python3.4): pyspark.serializers (10s)
Starting test(python3.4): pyspark.sql.conf
Finished test(python3.4): pyspark.sql.conf (4s)
Starting test(python3.4): pyspark.sql.context
Finished test(python3.4): pyspark.sql.catalog (11s)
Starting test(python3.4): pyspark.sql.dataframe
Finished test(python3.4): pyspark.sql.column (13s)
Starting test(python3.4): pyspark.sql.functions
Finished test(python3.4): pyspark.rdd (31s)
Starting test(python3.4): pyspark.sql.group
Finished test(python3.4): pyspark.sql.context (18s)
Starting test(python3.4): pyspark.sql.readwriter
Finished test(python3.4): pyspark.sql.group (20s)
Starting test(python3.4): pyspark.sql.session
Finished test(python3.4): pyspark.sql.functions (32s)
Starting test(python3.4): pyspark.sql.streaming
Finished test(python3.4): pyspark.sql.dataframe (35s)
Starting test(python3.4): pyspark.sql.types
Finished test(python3.4): pyspark.sql.readwriter (25s)
Starting test(python3.4): pyspark.sql.window
Finished test(python3.4): pyspark.sql.types (3s)
Starting test(python3.4): pyspark.streaming.util
Finished test(python3.4): pyspark.streaming.util (0s)
Finished test(python3.4): pyspark.sql.window (4s)
Finished test(python3.4): pyspark.sql.streaming (10s)
Finished test(python3.4): pyspark.sql.session (15s)
Tests passed in 1093 seconds

========================================================================
Running PySpark packaging tests
========================================================================
Constructing virtual env for testing
Missing virtualenv; skipping pip installability tests.
Cleaning up temporary directory - /tmp/tmp.2c3gmRvyoN

========================================================================
Running SparkR tests
========================================================================
Loading required package: methods

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

    describe

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, intersect, rank, rbind,
    sample, subset, summary, transform, union
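
The masking notices above are ordinary R namespace behavior: SparkR exports its own filter, cov, union, describe, and friends, which shadow the base/stats/testthat versions on the search path. The shadowed originals stay reachable through the :: operator; a minimal sketch, assuming an already-running SparkR session (data and calls illustrative, not from this test run):

    library(SparkR)
    # SparkR's filter() dispatches on SparkDataFrame and masks stats::filter()
    df <- createDataFrame(data.frame(x = c(1, 2, 3)))
    head(filter(df, df$x > 1))
    # The masked stats version is still available via the namespace operator
    stats::filter(1:5, rep(1, 2))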

Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
basic tests for CRAN: ...........

DONE ===========================================================================
SerDe functionality: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
...................
Windows-specific tests: S
functions on binary files: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
....
binary functions: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
...........
broadcast variables: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
..
functions in client.R: .....
test functions in sparkR.R: ................................
include R packages: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2

JVM API: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
..
MLlib functions: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
......................................SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
...................................................................................................................................................................................................................................
parallelize() and collect(): Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
....................
[Stage 4:>                                                          (0 + 0) / 3]
.........
basic RDD functions: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
.......
[Stage 6:>                                                          (0 + 2) / 2]
.
[Stage 8:>                                                          (0 + 0) / 2]
...................
[Stage 18:=============================>                            (1 + 1) / 2]
.....................................................................................................................................................................................................................................................................................................................
[Stage 52:>                                                         (0 + 2) / 2]
......
[Stage 58:=============================>                            (2 + 2) / 4]
..
[Stage 60:>                                                         (0 + 4) / 4]
...
[Stage 65:>                                                         (0 + 2) / 2]
....................
[Stage 90:>                                                         (0 + 2) / 2]
.......
[Stage 97:>                                                         (0 + 2) / 2]
.
[Stage 98:=============================>                            (2 + 2) / 4]
......
[Stage 109:>                                                        (0 + 2) / 2]
.......
[Stage 123:>                                                        (0 + 2) / 2]
......
[Stage 135:>                                                        (0 + 2) / 2]
..
[Stage 139:>                                                        (0 + 2) / 2]
......
[Stage 151:>                                                        (0 + 2) / 2]
..
[Stage 159:============================>                            (1 + 1) / 2]
.......................
[Stage 201:===================================================>  (96 + 4) / 100]
.
partitionBy, groupByKey, reduceByKey etc.: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
.........
[Stage 18:=============================>                            (1 + 1) / 2]
.
[Stage 20:=============================>                            (1 + 1) / 2]
..........
functions in sparkR.R: ....
SparkSQL functions: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
.................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
[Stage 949:=================================================>  (189 + 11) / 200]
............
[Stage 955:================================================>   (188 + 12) / 200]
.
[Stage 957:=================================================>  (190 + 10) / 200]
..
[Stage 961:================================================>   (188 + 12) / 200]
.
[Stage 963:==================================================>  (192 + 8) / 200]
.
[Stage 965:====================================================>(199 + 1) / 200]
.
[Stage 968:====================================================>(197 + 3) / 200]
.
[Stage 971:=================================================>  (190 + 10) / 200]
..............................................................
tests RDD function take(): Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
................
the textFile() function: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
....
[Stage 3:>                                                          (0 + 2) / 2]
...
[Stage 10:>                                                         (0 + 2) / 2]
......
functions in utils.R: Spark package found in SPARK_HOME: /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2
..............................................

Skipped ------------------------------------------------------------------------
1. sparkJars tag in SparkContext (@test_Windows.R#21) - This test is only for Windows, skipped

DONE ===========================================================================
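
The single skipped test above comes from testthat's conditional-skip mechanism. A platform guard of roughly this shape (a hypothetical reconstruction, not the actual test_Windows.R source) produces exactly that summary entry on non-Windows hosts:

    library(testthat)
    test_that("sparkJars tag in SparkContext", {
      # Hypothetical guard: skip() records the reason shown in the summary
      if (.Platform$OS.type != "windows") {
        skip("This test is only for Windows, skipped")
      }
      # Windows-only assertions on the sparkJars tag would follow here
    })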
Using R_SCRIPT_PATH = /usr/bin
Using Scala 2.11
/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
+++ dirname ./install-dev.sh
++ cd .
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
+ '[' '!' -z '' ']'
++ command -v R
+ '[' '!' /usr/bin/R ']'
+++ which R
++ dirname /usr/bin/R
+ R_SCRIPT_PATH=/usr/bin
+ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Updating SparkR documentation
Loading SparkR
Loading required package: methods
Creating a new generic function for 'as.data.frame' in package 'SparkR'
Creating a new generic function for 'colnames' in package 'SparkR'
Creating a new generic function for 'colnames<-' in package 'SparkR'
Creating a new generic function for 'cov' in package 'SparkR'
Creating a new generic function for 'drop' in package 'SparkR'
Creating a new generic function for 'na.omit' in package 'SparkR'
Creating a new generic function for 'filter' in package 'SparkR'
Creating a new generic function for 'intersect' in package 'SparkR'
Creating a new generic function for 'sample' in package 'SparkR'
Creating a new generic function for 'transform' in package 'SparkR'
Creating a new generic function for 'subset' in package 'SparkR'
Creating a new generic function for 'summary' in package 'SparkR'
Creating a new generic function for 'union' in package 'SparkR'
Creating a new generic function for 'lag' in package 'SparkR'
Creating a new generic function for 'rank' in package 'SparkR'
Creating a new generic function for 'sd' in package 'SparkR'
Creating a new generic function for 'var' in package 'SparkR'
Creating a new generic function for 'window' in package 'SparkR'
Creating a new generic function for 'predict' in package 'SparkR'
Creating a new generic function for 'rbind' in package 'SparkR'
Creating a generic function for 'lapply' from package 'base' in package 'SparkR'
Creating a generic function for 'Filter' from package 'base' in package 'SparkR'
Creating a generic function for 'alias' from package 'stats' in package 'SparkR'
Creating a generic function for 'substr' from package 'base' in package 'SparkR'
Creating a generic function for '%in%' from package 'base' in package 'SparkR'
Creating a generic function for 'mean' from package 'base' in package 'SparkR'
Creating a generic function for 'unique' from package 'base' in package 'SparkR'
Creating a generic function for 'nrow' from package 'base' in package 'SparkR'
Creating a generic function for 'ncol' from package 'base' in package 'SparkR'
Creating a generic function for 'head' from package 'utils' in package 'SparkR'
Creating a generic function for 'factorial' from package 'base' in package 'SparkR'
Creating a generic function for 'atan2' from package 'base' in package 'SparkR'
Creating a generic function for 'ifelse' from package 'base' in package 'SparkR'
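
The "Creating a (new) generic function" messages are S4 dispatch at work: when a package defines methods for a plain function from base, stats, or utils, R promotes that function to an S4 generic and logs the promotion. A standalone toy illustration (hypothetical class, not SparkR code) that triggers the same kind of message:

    # Toy S4 class; defining a method on utils::head() makes R create an
    # implicit generic and print "Creating a generic function for 'head' ..."
    setClass("Toy", representation(values = "numeric"))
    setMethod("head", "Toy", function(x, ...) x@values)
    head(new("Toy", values = c(5, 6, 7)))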
Warning messages:
1: In check_dep_version(pkg, version, compare) :
  Need roxygen2 >= 5.0.0 but loaded version is 4.1.1
2: In check_dep_version(pkg, version, compare) :
  Need roxygen2 >= 5.0.0 but loaded version is 4.1.1
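
The repeated warning means devtools::document() wanted roxygen2 5.0.0 or newer but ran against 4.1.1, so newer roclet features were unavailable; the build simply continues. A quick local check and upgrade, assuming a writable package library, might look like:

    # Upgrade roxygen2 when the loaded version predates 5.0.0
    if (packageVersion("roxygen2") < "5.0.0") {
      install.packages("roxygen2")
    }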
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/lib /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/pkg/
* installing *source* package 'SparkR' ...
** R
** inst
** preparing package for lazy loading
Creating a new generic function for 'as.data.frame' in package 'SparkR'
Creating a new generic function for 'colnames' in package 'SparkR'
Creating a new generic function for 'colnames<-' in package 'SparkR'
Creating a new generic function for 'cov' in package 'SparkR'
Creating a new generic function for 'drop' in package 'SparkR'
Creating a new generic function for 'na.omit' in package 'SparkR'
Creating a new generic function for 'filter' in package 'SparkR'
Creating a new generic function for 'intersect' in package 'SparkR'
Creating a new generic function for 'sample' in package 'SparkR'
Creating a new generic function for 'transform' in package 'SparkR'
Creating a new generic function for 'subset' in package 'SparkR'
Creating a new generic function for 'summary' in package 'SparkR'
Creating a new generic function for 'union' in package 'SparkR'
Creating a new generic function for 'lag' in package 'SparkR'
Creating a new generic function for 'rank' in package 'SparkR'
Creating a new generic function for 'sd' in package 'SparkR'
Creating a new generic function for 'var' in package 'SparkR'
Creating a new generic function for 'window' in package 'SparkR'
Creating a new generic function for 'predict' in package 'SparkR'
Creating a new generic function for 'rbind' in package 'SparkR'
Creating a generic function for 'lapply' from package 'base' in package 'SparkR'
Creating a generic function for 'Filter' from package 'base' in package 'SparkR'
Creating a generic function for 'alias' from package 'stats' in package 'SparkR'
Creating a generic function for 'substr' from package 'base' in package 'SparkR'
Creating a generic function for '%in%' from package 'base' in package 'SparkR'
Creating a generic function for 'mean' from package 'base' in package 'SparkR'
Creating a generic function for 'unique' from package 'base' in package 'SparkR'
Creating a generic function for 'nrow' from package 'base' in package 'SparkR'
Creating a generic function for 'ncol' from package 'base' in package 'SparkR'
Creating a generic function for 'head' from package 'utils' in package 'SparkR'
Creating a generic function for 'factorial' from package 'base' in package 'SparkR'
Creating a generic function for 'atan2' from package 'base' in package 'SparkR'
Creating a generic function for 'ifelse' from package 'base' in package 'SparkR'
** help
*** installing help indices
  converting help for package 'SparkR'
    finding HTML links ... done
    AFTSurvivalRegressionModel-class        html  
    ALSModel-class                          html  
    GBTClassificationModel-class            html  
    GBTRegressionModel-class                html  
    GaussianMixtureModel-class              html  
    GeneralizedLinearRegressionModel-class  html  
    GroupedData                             html  
    IsotonicRegressionModel-class           html  
    KMeansModel-class                       html  
    KSTest-class                            html  
    LDAModel-class                          html  
    LogisticRegressionModel-class           html  
    MultilayerPerceptronClassificationModel-class  html  
    NaiveBayesModel-class                   html  
    RandomForestClassificationModel-class   html  
    RandomForestRegressionModel-class       html  
    SparkDataFrame                          html  
    WindowSpec                              html  
    abs                                     html  
    acos                                    html  
    add_months                              html  
    alias                                   html  
    approxCountDistinct                     html  
    approxQuantile                          html  
    arrange                                 html  
    array_contains                          html  
    as.data.frame                           html  
    ascii                                   html  
    asin                                    html  
    atan                                    html  
    atan2                                   html  
    attach                                  html  
    avg                                     html  
    base64                                  html  
    between                                 html  
    bin                                     html  
    bitwiseNOT                              html  
    bround                                  html  
    cache                                   html  
    cacheTable                              html  
    cancelJobGroup                          html  
    cast                                    html  
    cbrt                                    html  
    ceil                                    html  
    clearCache                              html  
    clearJobGroup                           html  
    coalesce                                html  
    collect                                 html  
    coltypes                                html  
    column                                  html  
    columnfunctions                         html  
    columns                                 html  
    concat                                  html  
    concat_ws                               html  
    conv                                    html  
    corr                                    html  
    cos                                     html  
    cosh                                    html  
    count                                   html  
    countDistinct                           html  
    cov                                     html  
    covar_pop                               html  
    crc32                                   html  
    createDataFrame                         html  
    createExternalTable                     html  
    createOrReplaceTempView                 html  
    crossJoin                               html  
    crosstab                                html  
    cume_dist                               html  
    dapply                                  html  
    dapplyCollect                           html  
    date_add                                html  
    date_format                             html  
    date_sub                                html  
    datediff                                html  
    dayofmonth                              html  
    dayofyear                               html  
    decode                                  html  
    dense_rank                              html  
    dim                                     html  
    distinct                                html  
    drop                                    html  
    dropDuplicates                          html  
    dropTempTable-deprecated                html  
    dropTempView                            html  
    dtypes                                  html  
    encode                                  html  
    endsWith                                html  
    except                                  html  
    exp                                     html  
    explain                                 html  
    explode                                 html  
    expm1                                   html  
    expr                                    html  
    factorial                               html  
    filter                                  html  
    first                                   html  
    fitted                                  html  
    floor                                   html  
    format_number                           html  
    format_string                           html  
    freqItems                               html  
    from_unixtime                           html  
    from_utc_timestamp                      html  
    gapply                                  html  
    gapplyCollect                           html  
    generateAliasesForIntersectedCols       html  
    getNumPartitions                        html  
    glm                                     html  
    greatest                                html  
    groupBy                                 html  
    hash                                    html  
    hashCode                                html  
    head                                    html  
    hex                                     html  
    histogram                               html  
    hour                                    html  
    hypot                                   html  
    ifelse                                  html  
    initcap                                 html  
    insertInto                              html  
    install.spark                           html  
    instr                                   html  
    intersect                               html  
    is.nan                                  html  
    isLocal                                 html  
    join                                    html  
    kurtosis                                html  
    lag                                     html  
    last                                    html  
    last_day                                html  
    lead                                    html  
    least                                   html  
    length                                  html  
    levenshtein                             html  
    limit                                   html  
    lit                                     html  
    locate                                  html  
    log                                     html  
    log10                                   html  
    log1p                                   html  
    log2                                    html  
    lower                                   html  
    lpad                                    html  
    ltrim                                   html  
    match                                   html  
    max                                     html  
    md5                                     html  
    mean                                    html  
    merge                                   html  
    min                                     html  
    minute                                  html  
    monotonically_increasing_id             html  
    month                                   html  
    months_between                          html  
    mutate                                  html  
    nafunctions                             html  
    nanvl                                   html  
    ncol                                    html  
    negate                                  html  
    next_day                                html  
    nrow                                    html  
    ntile                                   html  
    orderBy                                 html  
    otherwise                               html  
    over                                    html  
    partitionBy                             html  
    percent_rank                            html  
    persist                                 html  
    pivot                                   html  
    pmod                                    html  
    posexplode                              html  
    predict                                 html  
    print.jobj                              html  
    print.structField                       html  
    print.structType                        html  
    printSchema                             html  
    quarter                                 html  
    rand                                    html  
    randn                                   html  
    randomSplit                             html  
    rangeBetween                            html  
    rank                                    html  
    rbind                                   html  
    read.df                                 html  
    read.jdbc                               html  
    read.json                               html  
    read.ml                                 html  
    read.orc                                html  
    read.parquet                            html  
    read.text                               html  
    regexp_extract                          html  
    regexp_replace                          html  
    registerTempTable-deprecated            html  
    rename                                  html  
    repartition                             html  
    reverse                                 html  
    rint                                    html  
    round                                   html  
    row_number                              html  
    rowsBetween                             html  
    rpad                                    html  
    rtrim                                   html  
    sample                                  html  
    sampleBy                                html  
    saveAsTable                             html  
    schema                                  html  
    sd                                      html  
    second                                  html  
    select                                  html  
    selectExpr                              html  
    setJobGroup                             html  
    setLogLevel                             html  
    sha1                                    html  
    sha2                                    html  
    shiftLeft                               html  
    shiftRight                              html  
    shiftRightUnsigned                      html  
    show                                    html  
    showDF                                  html  
    sign                                    html  
    sin                                     html  
    sinh                                    html  
    size                                    html  
    skewness                                html  
    sort_array                              html  
    soundex                                 html  
    spark.addFile                           html  
    spark.als                               html  
    spark.gaussianMixture                   html  
    spark.gbt                               html  
    spark.getSparkFiles                     html  
    spark.getSparkFilesRootDirectory        html  
    spark.glm                               html  
    spark.isoreg                            html  
    spark.kmeans                            html  
    spark.kstest                            html  
    spark.lapply                            html  
    spark.lda                               html  
    spark.logit                             html  
    spark.mlp                               html  
    spark.naiveBayes                        html  
    spark.randomForest                      html  
    spark.survreg                           html  
    sparkR.callJMethod                      html  
    sparkR.callJStatic                      html  
    sparkR.conf                             html  
    sparkR.init-deprecated                  html  
    sparkR.newJObject                       html  
    sparkR.session                          html  
    sparkR.session.stop                     html  
    sparkR.uiWebUrl                         html  
    sparkR.version                          html  
    sparkRHive.init-deprecated              html  
    sparkRSQL.init-deprecated               html  
    spark_partition_id                      html  
    sql                                     html  
    sqrt                                    html  
    startsWith                              html  
    stddev_pop                              html  
    stddev_samp                             html  
    storageLevel                            html  
    str                                     html  
    struct                                  html  
    structField                             html  
    structType                              html  
    subset                                  html  
    substr                                  html  
    substring_index                         html  
    sum                                     html  
    sumDistinct                             html  
    summarize                               html  
    summary                                 html  
    tableNames                              html  
    tableToDF                               html  
    tables                                  html  
    take                                    html  
    tan                                     html  
    tanh                                    html  
    toDegrees                               html  
    toRadians                               html  
    to_date                                 html  
    to_utc_timestamp                        html  
    translate                               html  
    trim                                    html  
    unbase64                                html  
    uncacheTable                            html  
    unhex                                   html  
    union                                   html  
    unix_timestamp                          html  
    unpersist                               html  
    upper                                   html  
    var                                     html  
    var_pop                                 html  
    var_samp                                html  
    weekofyear                              html  
    when                                    html  
    window                                  html  
    windowOrderBy                           html  
    windowPartitionBy                       html  
    with                                    html  
    withColumn                              html  
    write.df                                html  
    write.jdbc                              html  
    write.json                              html  
    write.ml                                html  
    write.orc                               html  
    write.parquet                           html  
    write.text                              html  
    year                                    html  
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/lib/sparkr.zip SparkR
+ popd
/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/pkg/html /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
Loading required package: methods

Attaching package: 'SparkR'

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, intersect, rank, rbind,
    sample, subset, summary, transform, union
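
The "** knitting documentation of ..." lines below come from the SparkR HTML doc build, which knits the examples section of each generated Rd page; "no examples found" just means that topic's roxygen block carries no @examples tag. A hypothetical roxygen block (illustrative only, not the real SparkR source) that would give the knitter something to run:

    #' Column absolute value (illustrative docs stub)
    #'
    #' @examples
    #' \dontrun{
    #' df <- createDataFrame(data.frame(x = c(-1, 2)))
    #' head(select(df, abs(df$x)))
    #' }
    NULL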

** knitting documentation of AFTSurvivalRegressionModel-class
no examples found for AFTSurvivalRegressionModel-class
** knitting documentation of ALSModel-class
no examples found for ALSModel-class
** knitting documentation of GBTClassificationModel-class
no examples found for GBTClassificationModel-class
** knitting documentation of GBTRegressionModel-class
no examples found for GBTRegressionModel-class
** knitting documentation of GaussianMixtureModel-class
no examples found for GaussianMixtureModel-class
** knitting documentation of GeneralizedLinearRegressionModel-class
no examples found for GeneralizedLinearRegressionModel-class
** knitting documentation of GroupedData
no examples found for GroupedData
** knitting documentation of IsotonicRegressionModel-class
no examples found for IsotonicRegressionModel-class
** knitting documentation of KMeansModel-class
no examples found for KMeansModel-class
** knitting documentation of KSTest-class
no examples found for KSTest-class
** knitting documentation of LDAModel-class
no examples found for LDAModel-class
** knitting documentation of LogisticRegressionModel-class
no examples found for LogisticRegressionModel-class
** knitting documentation of MultilayerPerceptronClassificationModel-class
no examples found for MultilayerPerceptronClassificationModel-class
** knitting documentation of NaiveBayesModel-class
no examples found for NaiveBayesModel-class
** knitting documentation of RandomForestClassificationModel-class
no examples found for RandomForestClassificationModel-class
** knitting documentation of RandomForestRegressionModel-class
no examples found for RandomForestRegressionModel-class
** knitting documentation of SparkDataFrame
** knitting documentation of WindowSpec
no examples found for WindowSpec
** knitting documentation of abs
** knitting documentation of acos
** knitting documentation of add_months
** knitting documentation of alias
no examples found for alias
** knitting documentation of approxCountDistinct
** knitting documentation of approxQuantile
** knitting documentation of arrange
** knitting documentation of array_contains
** knitting documentation of as.data.frame
** knitting documentation of ascii
** knitting documentation of asin
** knitting documentation of atan
** knitting documentation of atan2
** knitting documentation of attach
** knitting documentation of avg
** knitting documentation of base64
** knitting documentation of between
no examples found for between
** knitting documentation of bin
** knitting documentation of bitwiseNOT
** knitting documentation of bround
** knitting documentation of cache
** knitting documentation of cacheTable
** knitting documentation of cancelJobGroup
** knitting documentation of cast
** knitting documentation of cbrt
** knitting documentation of ceil
** knitting documentation of clearCache
** knitting documentation of clearJobGroup
** knitting documentation of coalesce
** knitting documentation of collect
** knitting documentation of coltypes
** knitting documentation of column
** knitting documentation of columnfunctions
no examples found for columnfunctions
** knitting documentation of columns
** knitting documentation of concat
** knitting documentation of concat_ws
** knitting documentation of conv
** knitting documentation of corr
** knitting documentation of cos
** knitting documentation of cosh
** knitting documentation of count
** knitting documentation of countDistinct
** knitting documentation of cov
** knitting documentation of covar_pop
** knitting documentation of crc32
** knitting documentation of createDataFrame
** knitting documentation of createExternalTable
** knitting documentation of createOrReplaceTempView
** knitting documentation of crossJoin
** knitting documentation of crosstab
** knitting documentation of cume_dist
** knitting documentation of dapply
** knitting documentation of dapplyCollect
** knitting documentation of date_add
** knitting documentation of date_format
** knitting documentation of date_sub
** knitting documentation of datediff
** knitting documentation of dayofmonth
** knitting documentation of dayofyear
** knitting documentation of decode
** knitting documentation of dense_rank
** knitting documentation of dim
** knitting documentation of distinct
** knitting documentation of drop
** knitting documentation of dropDuplicates
** knitting documentation of dropTempTable-deprecated
** knitting documentation of dropTempView
** knitting documentation of dtypes
** knitting documentation of encode
** knitting documentation of endsWith
no examples found for endsWith
** knitting documentation of except
** knitting documentation of exp
** knitting documentation of explain
** knitting documentation of explode
** knitting documentation of expm1
** knitting documentation of expr
** knitting documentation of factorial
** knitting documentation of filter
** knitting documentation of first
** knitting documentation of fitted
** knitting documentation of floor
** knitting documentation of format_number
** knitting documentation of format_string
** knitting documentation of freqItems
** knitting documentation of from_unixtime
** knitting documentation of from_utc_timestamp
** knitting documentation of gapply
** knitting documentation of gapplyCollect
** knitting documentation of generateAliasesForIntersectedCols
no examples found for generateAliasesForIntersectedCols
** knitting documentation of getNumPartitions
** knitting documentation of glm
** knitting documentation of greatest
** knitting documentation of groupBy
** knitting documentation of hash
** knitting documentation of hashCode
** knitting documentation of head
** knitting documentation of hex
** knitting documentation of histogram
** knitting documentation of hour
** knitting documentation of hypot
** knitting documentation of ifelse
** knitting documentation of initcap
** knitting documentation of insertInto
** knitting documentation of install.spark
** knitting documentation of instr
** knitting documentation of intersect
** knitting documentation of is.nan
** knitting documentation of isLocal
** knitting documentation of join
** knitting documentation of kurtosis
** knitting documentation of lag
** knitting documentation of last
** knitting documentation of last_day
** knitting documentation of lead
** knitting documentation of least
** knitting documentation of length
** knitting documentation of levenshtein
** knitting documentation of limit
** knitting documentation of lit
** knitting documentation of locate
** knitting documentation of log
** knitting documentation of log10
** knitting documentation of log1p
** knitting documentation of log2
** knitting documentation of lower
** knitting documentation of lpad
** knitting documentation of ltrim
** knitting documentation of match
** knitting documentation of max
** knitting documentation of md5
** knitting documentation of mean
** knitting documentation of merge
** knitting documentation of min
** knitting documentation of minute
** knitting documentation of monotonically_increasing_id
** knitting documentation of month
** knitting documentation of months_between
** knitting documentation of mutate
** knitting documentation of nafunctions
** knitting documentation of nanvl
** knitting documentation of ncol
** knitting documentation of negate
** knitting documentation of next_day
** knitting documentation of nrow
** knitting documentation of ntile
** knitting documentation of orderBy
** knitting documentation of otherwise
no examples found for otherwise
** knitting documentation of over
** knitting documentation of partitionBy
** knitting documentation of percent_rank
** knitting documentation of persist
** knitting documentation of pivot
** knitting documentation of pmod
** knitting documentation of posexplode
** knitting documentation of predict
no examples found for predict
** knitting documentation of print.jobj
no examples found for print.jobj
** knitting documentation of print.structField
no examples found for print.structField
** knitting documentation of print.structType
no examples found for print.structType
** knitting documentation of printSchema
** knitting documentation of quarter
** knitting documentation of rand
** knitting documentation of randn
** knitting documentation of randomSplit
** knitting documentation of rangeBetween
** knitting documentation of rank
** knitting documentation of rbind
** knitting documentation of read.df
** knitting documentation of read.jdbc
** knitting documentation of read.json
** knitting documentation of read.ml
** knitting documentation of read.orc
no examples found for read.orc
** knitting documentation of read.parquet
no examples found for read.parquet
** knitting documentation of read.text
** knitting documentation of regexp_extract
** knitting documentation of regexp_replace
** knitting documentation of registerTempTable-deprecated
** knitting documentation of rename
** knitting documentation of repartition
** knitting documentation of reverse
** knitting documentation of rint
** knitting documentation of round
** knitting documentation of row_number
** knitting documentation of rowsBetween
** knitting documentation of rpad
** knitting documentation of rtrim
** knitting documentation of sample
** knitting documentation of sampleBy
** knitting documentation of saveAsTable
** knitting documentation of schema
** knitting documentation of sd
** knitting documentation of second
** knitting documentation of select
** knitting documentation of selectExpr
** knitting documentation of setJobGroup
** knitting documentation of setLogLevel
** knitting documentation of sha1
** knitting documentation of sha2
** knitting documentation of shiftLeft
** knitting documentation of shiftRight
** knitting documentation of shiftRightUnsigned
** knitting documentation of show
** knitting documentation of showDF
** knitting documentation of sign
** knitting documentation of sin
** knitting documentation of sinh
** knitting documentation of size
** knitting documentation of skewness
** knitting documentation of sort_array
** knitting documentation of soundex
** knitting documentation of spark.addFile
** knitting documentation of spark.als
** knitting documentation of spark.gaussianMixture
** knitting documentation of spark.gbt
** knitting documentation of spark.getSparkFiles
** knitting documentation of spark.getSparkFilesRootDirectory
** knitting documentation of spark.glm
** knitting documentation of spark.isoreg
** knitting documentation of spark.kmeans
** knitting documentation of spark.kstest
** knitting documentation of spark.lapply
** knitting documentation of spark.lda
** knitting documentation of spark.logit
** knitting documentation of spark.mlp
** knitting documentation of spark.naiveBayes
** knitting documentation of spark.randomForest
** knitting documentation of spark.survreg
** knitting documentation of sparkR.callJMethod
** knitting documentation of sparkR.callJStatic
** knitting documentation of sparkR.conf
** knitting documentation of sparkR.init-deprecated
** knitting documentation of sparkR.newJObject
** knitting documentation of sparkR.session
** knitting documentation of sparkR.session.stop
no examples found for sparkR.session.stop
** knitting documentation of sparkR.uiWebUrl
** knitting documentation of sparkR.version
** knitting documentation of sparkRHive.init-deprecated
** knitting documentation of sparkRSQL.init-deprecated
** knitting documentation of spark_partition_id
** knitting documentation of sql
** knitting documentation of sqrt
** knitting documentation of startsWith
no examples found for startsWith
** knitting documentation of stddev_pop
** knitting documentation of stddev_samp
** knitting documentation of storageLevel
** knitting documentation of str
** knitting documentation of struct
** knitting documentation of structField
** knitting documentation of structType
** knitting documentation of subset
** knitting documentation of substr
no examples found for substr
** knitting documentation of substring_index
** knitting documentation of sum
** knitting documentation of sumDistinct
** knitting documentation of summarize
** knitting documentation of summary
** knitting documentation of tableNames
** knitting documentation of tableToDF
** knitting documentation of tables
** knitting documentation of take
** knitting documentation of tan
** knitting documentation of tanh
** knitting documentation of toDegrees
** knitting documentation of toRadians
** knitting documentation of to_date
** knitting documentation of to_utc_timestamp
** knitting documentation of translate
** knitting documentation of trim
** knitting documentation of unbase64
** knitting documentation of uncacheTable
** knitting documentation of unhex
** knitting documentation of union
** knitting documentation of unix_timestamp
** knitting documentation of unpersist
** knitting documentation of upper
** knitting documentation of var
** knitting documentation of var_pop
** knitting documentation of var_samp
** knitting documentation of weekofyear
** knitting documentation of when
** knitting documentation of window
** knitting documentation of windowOrderBy
** knitting documentation of windowPartitionBy
** knitting documentation of with
** knitting documentation of withColumn
** knitting documentation of write.df
** knitting documentation of write.jdbc
** knitting documentation of write.json
** knitting documentation of write.ml
no examples found for write.ml
** knitting documentation of write.orc
** knitting documentation of write.parquet
** knitting documentation of write.text
** knitting documentation of year
/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R /home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R
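
The long '** knitting documentation of ...' pass above converts each Rd help page to HTML and evaluates its examples; 'no examples found' is reported for pages (mostly S4 model classes and internal helpers) that carry no examples section. A sketch of that step, assuming knitr is installed and SparkR sits in R/lib; knit_rd() writes the HTML into the current directory, which matches the earlier pushd into R/pkg/html:

    # Sketch: knit the installed package's Rd files to HTML with examples run.
    mkdir -p R/pkg/html
    cd R/pkg/html
    Rscript -e 'library(SparkR, lib.loc = "../../lib"); library(knitr); knit_rd("SparkR")'
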
* checking for file '/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/pkg/DESCRIPTION' ... OK
* preparing 'SparkR':
* checking DESCRIPTION meta-information ... OK
* installing the package to build vignettes
* creating vignettes ... OK
* checking for LF line-endings in source and make files
* checking for empty or unneeded directories
* building 'SparkR_2.1.4.tar.gz'

Running CRAN check with --as-cran --no-tests --no-manual --no-vignettes options
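
The block before this one is 'R CMD build' (vignettes included), and the line above launches 'R CMD check' against the tarball it produced. A sketch of the pair as run from the R/ directory, with the version string taken from the log:

    # Sketch: build the source tarball, then run the CRAN-style check on it.
    # --as-cran turns on the incoming-feasibility checks that yield the
    # WARNING below; tests, manual and vignettes are skipped to keep CI fast.
    R CMD build pkg
    R CMD check --as-cran --no-tests --no-manual --no-vignettes SparkR_2.1.4.tar.gz
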
* using log directory '/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/SparkR.Rcheck'
* using R version 3.1.1 (2014-07-10)
* using platform: x86_64-redhat-linux-gnu (64-bit)
* using session charset: ASCII
* using options '--no-tests --no-vignettes'
* checking for file 'SparkR/DESCRIPTION' ... OK
* checking extension type ... Package
* this is package 'SparkR' version '2.1.4'
* checking CRAN incoming feasibility ... WARNING
Maintainer: 'Shivaram Venkataraman <shivaram@cs.berkeley.edu>'
New submission
Package was archived on CRAN
Insufficient package version (submitted: 2.1.4, existing: 2.3.0)
Unknown, possibly mis-spelled, fields in DESCRIPTION:
  'RoxygenNote'
CRAN repository db overrides:
  X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
    corrected despite reminders.
  X-CRAN-History: Archived on 2017-10-22 for policy violation.
    Unarchived on 2018-03-03.
* checking package namespace information ... OK
* checking package dependencies ... NOTE
  No repository set, so cyclic dependency check skipped
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package 'SparkR' can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking 'build' directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking loading without being on the library search path ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... NOTE
Found the following calls to attach():
File 'SparkR/R/DataFrame.R':
  attach(newEnv, pos = pos, name = name, warn.conflicts = warn.conflicts)
See section 'Good practice' in '?attach'.
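
This NOTE flags SparkR's attach() method for SparkDataFrame, which the checker treats like any base::attach() call. The 'Good practice' section it cites recommends pairing attach with a guaranteed detach; a sketch of that pattern with a purely illustrative plain data frame:

    # Sketch of the '?attach' good-practice pattern: restore the search path
    # even if the body errors. The data frame here is illustrative only.
    Rscript -e '
      f <- function(df) {
        attach(df)
        on.exit(detach(df), add = TRUE)  # cleanup runs on error or return
        x + 1                            # "x" is found via the attached frame
      }
      print(f(data.frame(x = 1:3)))
    '
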
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd line widths ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking installed files from 'inst/doc' ... OK
* checking files in 'vignettes' ... OK
* checking examples ... OK
* checking for unstated dependencies in tests ... OK
* checking tests ... SKIPPED
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes in 'inst/doc' ... OK
* checking running R code from vignettes ... SKIPPED
* checking re-building of vignette outputs ... SKIPPED

WARNING: There was 1 warning.
NOTE: There were 2 notes.
See
  '/home/jenkins/workspace/spark-branch-2.1-test-sbt-hadoop-2.2/R/SparkR.Rcheck/00check.log'
for details.
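
For the full text of the warning and the two notes, the 00check.log named above is the single consolidated record; a trivial sketch of pulling the flagged sections back out:

    # Sketch: show each WARNING/NOTE from the check log with some context.
    grep -n -B1 -A3 -E 'WARNING|NOTE' SparkR.Rcheck/00check.log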

Tests passed.
Archiving artifacts
Recording test results
Finished: SUCCESS