Success
Console Output

Skipping 7,327 KB..
[info] - unresolved attribute 1 (9 milliseconds)
[info] - unresolved attribute 2 (11 milliseconds)
[info] - unresolved attribute 3 (9 milliseconds)
[info] - unresolved attribute 4 (9 milliseconds)
[info] - unresolved attribute 5 (10 milliseconds)
[info] - unresolved attribute 6 (13 milliseconds)
[info] - unresolved attribute 7 (16 milliseconds)
[info] - multi-char unresolved attribute (14 milliseconds)
[info] - unresolved attribute group by (11 milliseconds)
[info] - unresolved attribute order by (15 milliseconds)
[info] - unresolved attribute where (13 milliseconds)
[info] - unresolved attribute backticks (11 milliseconds)
[info] - parse error (4 milliseconds)
[info] - bad relation (5 milliseconds)
[info] - other expressions !!! IGNORED !!!
[info] HiveQuerySuite:
[info] - SPARK-4908: concurrent hive native commands (1 second, 245 milliseconds)
[info] - SPARK-10484 Optimize the Cartesian (Cross) Join with broadcast based JOIN #1 (1 second, 520 milliseconds)
[info] - SPARK-10484 Optimize the Cartesian (Cross) Join with broadcast based JOIN #2 (887 milliseconds)
[info] - SPARK-10484 Optimize the Cartesian (Cross) Join with broadcast based JOIN #3 (883 milliseconds)
[info] - SPARK-10484 Optimize the Cartesian (Cross) Join with broadcast based JOIN #4 (951 milliseconds)
[info] - SPARK-10484 Optimize the Cartesian (Cross) Join with broadcast based JOIN (87 milliseconds)
[info] - SPARK-8976 Wrong Result for Rollup #1 (483 milliseconds)
[info] - SPARK-8976 Wrong Result for Rollup #2 (549 milliseconds)
[info] - SPARK-8976 Wrong Result for Rollup #3 (475 milliseconds)
[info] - SPARK-8976 Wrong Result for CUBE #1 (410 milliseconds)
[info] - SPARK-8976 Wrong Result for CUBE #2 (465 milliseconds)
[info] - SPARK-8976 Wrong Result for GroupingSet (474 milliseconds)
[info] - insert table with generator with column name (681 milliseconds)
[info] - insert table with generator with multiple column names (883 milliseconds)
[info] - insert table with generator without column name (925 milliseconds)
[info] - multiple generators in projection (49 milliseconds)
[info] - ! operator (140 milliseconds)
[info] - constant object inspector for generic udf (259 milliseconds)
[info] - NaN to Decimal (253 milliseconds)
[info] - constant null testing (573 milliseconds)
[info] - constant null testing timestamp (21 milliseconds)
[info] - constant array (530 milliseconds)
[info] - null case (282 milliseconds)
[info] - single case (363 milliseconds)
[info] - double case (251 milliseconds)
[info] - case else null (287 milliseconds)
[info] - having no references (311 milliseconds)
[info] - no from clause (115 milliseconds)
[info] - boolean = number (308 milliseconds)
[info] - CREATE TABLE AS runs once (385 milliseconds)
[info] - between (380 milliseconds)
[info] - div (318 milliseconds)
[info] - division (66 milliseconds)
[info] - modulus (284 milliseconds)
[info] - Query expressed in SQL (17 milliseconds)
[info] - Query expressed in HiveQL (33 milliseconds)
[info] - Query with constant folding the CAST (39 milliseconds)
[info] - Constant Folding Optimization for AVG_SUM_COUNT (853 milliseconds)
[info] - Cast Timestamp to Timestamp in UDF (343 milliseconds)
[info] - Date comparison test 1 (381 milliseconds)
[info] - Simple Average (548 milliseconds)
[info] - Simple Average + 1 (308 milliseconds)
[info] - Simple Average + 1 with group (309 milliseconds)
[info] - string literal (351 milliseconds)
[info] - Escape sequences (267 milliseconds)
[info] - IgnoreExplain (197 milliseconds)
[info] - trivial join where clause (315 milliseconds)
[info] - trivial join ON clause (339 milliseconds)
[info] - small.cartesian (389 milliseconds)
[info] - length.udf (260 milliseconds)
[info] - partitioned table scan (1 second, 911 milliseconds)
[info] - hash (468 milliseconds)
[info] - create table as (543 milliseconds)
12:36:15.374 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database testdb, returning NoSuchObjectException
[info] - create table as with db name (1 second, 97 milliseconds)
12:36:16.257 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database testdb, returning NoSuchObjectException
[info] - create table as with db name within backticks (616 milliseconds)
12:36:16.827 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database testdb, returning NoSuchObjectException
[info] - insert table with db name (558 milliseconds)
[info] - insert into and insert overwrite (1 second, 58 milliseconds)
[info] - SPARK-7270: consider dynamic partition when comparing table output (263 milliseconds)
[info] - transform (772 milliseconds)
[info] - schema-less transform (535 milliseconds)
[info] - transform with custom field delimiter (435 milliseconds)
[info] - transform with custom field delimiter2 (311 milliseconds)
[info] - transform with custom field delimiter3 (319 milliseconds)
[info] - transform with SerDe (303 milliseconds)
[info] - transform with SerDe2 (324 milliseconds)
[info] - transform with SerDe3 (307 milliseconds)
[info] - transform with SerDe4 (396 milliseconds)
[info] - LIKE (258 milliseconds)
[info] - DISTINCT (465 milliseconds)
[info] - empty aggregate input (359 milliseconds)
[info] - lateral view1 (330 milliseconds)
[info] - lateral view2 (337 milliseconds)
[info] - lateral view3 (501 milliseconds)
[info] - lateral view4 (1 second, 230 milliseconds)
[info] - lateral view5 (538 milliseconds)
[info] - lateral view6 (288 milliseconds)
[info] - Specify the udtf output (243 milliseconds)
[info] - SPARK-9034 Reflect field names defined in GenericUDTF #1 (245 milliseconds)
[info] - SPARK-9034 Reflect field names defined in GenericUDTF #2 (238 milliseconds)
[info] - sampling (18 milliseconds)
[info] - DataFrame toString (7 milliseconds)
[info] - case statements with key #1 (274 milliseconds)
[info] - case statements with key #2 (236 milliseconds)
[info] - case statements with key #3 (339 milliseconds)
[info] - case statements with key #4 (278 milliseconds)
[info] - case statements WITHOUT key #1 (380 milliseconds)
[info] - case statements WITHOUT key #2 (308 milliseconds)
[info] - case statements WITHOUT key #3 (244 milliseconds)
[info] - case statements WITHOUT key #4 (276 milliseconds)
[info] - timestamp cast #1 (42 milliseconds)
[info] - timestamp cast #2 (254 milliseconds)
[info] - timestamp cast #3 (38 milliseconds)
[info] - timestamp cast #4 (250 milliseconds)
[info] - timestamp cast #5 (39 milliseconds)
[info] - timestamp cast #6 (282 milliseconds)
[info] - timestamp cast #7 (36 milliseconds)
[info] - timestamp cast #8 (263 milliseconds)
[info] - select null from table (292 milliseconds)
[info] - CTE feature #1 (261 milliseconds)
[info] - CTE feature #2 (300 milliseconds)
[info] - CTE feature #3 (552 milliseconds)
[info] - get_json_object #1 (1 second, 61 milliseconds)
[info] - get_json_object #2 (982 milliseconds)
[info] - get_json_object #3 (460 milliseconds)
[info] - get_json_object #4 (837 milliseconds)
[info] - get_json_object #5 (636 milliseconds)
[info] - get_json_object #6 (543 milliseconds)
[info] - get_json_object #7 (568 milliseconds)
[info] - get_json_object #8 (821 milliseconds)
[info] - get_json_object #9 (842 milliseconds)
[info] - get_json_object #10 (420 milliseconds)
[info] - predicates contains an empty AttributeSet() references (35 milliseconds)
[info] - implement identity function using case statement (72 milliseconds)
[info] - non-boolean conditions in a CaseWhen are illegal !!! IGNORED !!!
[info] - case sensitivity when query Hive table (261 milliseconds)
[info] - case sensitivity: registered table (45 milliseconds)
[info] - SPARK-1704: Explain commands as a DataFrame (141 milliseconds)
[info] - SPARK-2180: HAVING support in GROUP BY clauses (positive) (142 milliseconds)
[info] - SPARK-2180: HAVING with non-boolean clause raises no exceptions (250 milliseconds)
[info] - SPARK-2225: turn HAVING without GROUP BY into a simple filter (37 milliseconds)
[info] - SPARK-5383 alias for udfs with multi output columns (98 milliseconds)
[info] - SPARK-5367: resolve star expression in udf (176 milliseconds)
12:36:40.674 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database test_native_commands, returning NoSuchObjectException
12:36:40.720 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database test_native_commands, returning NoSuchObjectException
[info] - Query Hive native command execution result (257 milliseconds)
[info] - Exactly once semantics for DDL and command statements (103 milliseconds)
[info] - DESCRIBE commands (662 milliseconds)
[info] - SPARK-2263: Insert Map<K, V> values (373 milliseconds)
12:36:42.137 ERROR hive.ql.exec.DDLTask: MetaException(message:java.lang.ClassNotFoundException org.apache.hadoop.hive.serde2.TestSerDe)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:399)
	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
	at org.apache.hadoop.hive.ql.exec.DDLTask.alterTableOrSinglePartition(DDLTask.java:3545)
	at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3367)
	at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:337)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:495)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
	at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
	at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
	at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
	at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
	at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:484)
	at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:474)
	at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:605)
	at org.apache.spark.sql.hive.test.TestHiveContext.runSqlHive(TestHive.scala:114)
	at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite$$anonfun$34$$anonfun$apply$mcV$sp$13.apply(HiveQuerySuite.scala:950)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite$$anonfun$34$$anonfun$apply$mcV$sp$13.apply(HiveQuerySuite.scala:950)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite$$anonfun$34.apply$mcV$sp(HiveQuerySuite.scala:949)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite$$anonfun$34.apply(HiveQuerySuite.scala:946)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite$$anonfun$34.apply(HiveQuerySuite.scala:946)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:42)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite.org$scalatest$BeforeAndAfter$$super$runTest(HiveQuerySuite.scala:44)
	at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite.runTest(HiveQuerySuite.scala:44)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.spark.sql.hive.execution.HiveComparisonTest.org$scalatest$BeforeAndAfterAll$$super$run(HiveComparisonTest.scala:44)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite.org$scalatest$BeforeAndAfter$$super$run(HiveQuerySuite.scala:44)
	at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
	at org.apache.spark.sql.hive.execution.HiveQuerySuite.run(HiveQuerySuite.scala:44)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
	at sbt.ForkMain$Run$2.call(ForkMain.java:294)
	at sbt.ForkMain$Run$2.call(ForkMain.java:284)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

12:36:42.139 ERROR org.apache.hadoop.hive.ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.ClassNotFoundException org.apache.hadoop.hive.serde2.TestSerDe
12:36:42.140 ERROR org.apache.spark.sql.hive.client.ClientWrapper: 
======================
HIVE FAILURE OUTPUT
======================
gress.interval=60000
SET hive.server2.thrift.http.worker.keepalive.time=60
SET hive.compactor.check.interval=300
SET hive.txn.timeout=300
SET javax.jdo.option.ConnectionUserName=APP
SET hive.server2.thrift.login.timeout=20
SET hive.metastore.event.clean.freq=0
SET hive.metastore.client.connect.retry.delay=1
SET hive.server2.session.check.interval=21600000
SET datanucleus.storeManagerType=rdbms
SET hive.spark.job.monitor.timeout=60
SET hive.metastore.event.expiry.duration=0
SET hive.compactor.worker.timeout=86400
SET hive.zookeeper.connection.basesleeptime=1000
SET hive.metastore.client.socket.lifetime=0
SET hive.server2.async.exec.shutdown.timeout=10
SET hive.server2.thrift.worker.keepalive.time=60
SET datanucleus.autoStartMechanismMode=checked
SET hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
SET hive.server2.thrift.http.cookie.max.age=86400
SET hive.metastore.aggregate.stats.cache.max.reader.wait=1000
SET datanucleus.cache.level2.type=none
SET datanucleus.fixedDatastore=false
SET javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/spark-d9ff8ef1-3205-417d-8168-f2a0818c55b7/metastore;create=true
SET hive.auto.progress.timeout=0
SET hive.server2.thrift.http.max.idle.time=1800000
SET hive.lock.sleep.between.retries=60
SET javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver
SET datanucleus.plugin.pluginRegistryBundleCheck=LOG
SET hive.metastore.event.listeners=
SET javax.jdo.option.ConnectionPassword=mine
SET datanucleus.validateConstraints=false
SET hive.stats.retries.wait=3000
SET hive.zookeeper.session.timeout=1200000
SET hive.spark.client.server.connect.timeout=90000
SET datanucleus.autoCreateSchema=true
SET hive.server2.idle.session.timeout=604800000
SET hive.exec.scratchdir=file:/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/scratch--2657d609-1797-41c7-8fdd-83c54acdd7c8/
SET javax.jdo.option.Multithreaded=true
SET hive.metastore.client.socket.timeout=600
SET hive.server2.thrift.exponential.backoff.slot.length=100
SET hive.support.sql11.reserved.keywords=false
OK
RESET 
set hive.table.parameters.default=
set datanucleus.cache.collections=true
set datanucleus.cache.collections.lazy=true
set hive.metastore.partition.name.whitelist.pattern=.*
SET datanucleus.rdbms.datastoreAdapterClassName=org.datanucleus.store.rdbms.adapter.DerbyAdapter
SET datanucleus.identifierFactory=datanucleus1
SET hive.localize.resource.wait.interval=5000
SET javax.jdo.option.DetachAllOnCommit=true
SET hive.spark.client.connect.timeout=1000
SET hive.hmshandler.retry.interval=2000
SET javax.jdo.option.NonTransactionalRead=true
SET datanucleus.transactionIsolation=read-committed
SET datanucleus.cache.level2=false
SET datanucleus.validateColumns=false
SET hive.metastore.pre.event.listeners=
SET hive.spark.client.future.timeout=60
SET javax.jdo.PersistenceManagerFactoryClass=org.datanucleus.api.jdo.JDOPersistenceManagerFactory
SET hive.metastore.integral.jdo.pushdown=true
SET hive.metastore.warehouse.dir=file:/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/warehouse--d15f682e-e74f-41e7-8b21-54344e76e403/
SET hive.metastore.aggregate.stats.cache.ttl=600
SET hive.metastore.aggregate.stats.cache.max.writer.wait=5000
SET hive.metastore.uris=
SET datanucleus.validateTables=false
SET hive.server.read.socket.timeout=10
SET datanucleus.rdbms.useLegacyNativeValueStrategy=true
SET hive.server2.idle.operation.timeout=432000000
SET hive.stats.jdbc.timeout=30
SET hive.compactor.cleaner.run.interval=5000
SET hive.metastore.event.db.listener.timetolive=86400
SET hive.server2.async.exec.keepalive.time=10
SET hive.server2.long.polling.timeout=5000
SET hive.metastore.end.function.listeners=
SET datanucleus.connectionPoolingType=BONECP
SET hive.querylog.plan.progress.interval=60000
SET hive.server2.thrift.http.worker.keepalive.time=60
SET hive.compactor.check.interval=300
SET hive.txn.timeout=300
SET javax.jdo.option.ConnectionUserName=APP
SET hive.server2.thrift.login.timeout=20
SET hive.metastore.event.clean.freq=0
SET hive.metastore.client.connect.retry.delay=1
SET hive.server2.session.check.interval=21600000
SET datanucleus.storeManagerType=rdbms
SET hive.spark.job.monitor.timeout=60
SET hive.metastore.event.expiry.duration=0
SET hive.compactor.worker.timeout=86400
SET hive.zookeeper.connection.basesleeptime=1000
SET hive.metastore.client.socket.lifetime=0
SET hive.server2.async.exec.shutdown.timeout=10
SET hive.server2.thrift.worker.keepalive.time=60
SET datanucleus.autoStartMechanismMode=checked
SET hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
SET hive.server2.thrift.http.cookie.max.age=86400
SET hive.metastore.aggregate.stats.cache.max.reader.wait=1000
SET datanucleus.cache.level2.type=none
SET datanucleus.fixedDatastore=false
SET javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/spark-d9ff8ef1-3205-417d-8168-f2a0818c55b7/metastore;create=true
SET hive.auto.progress.timeout=0
SET hive.server2.thrift.http.max.idle.time=1800000
SET hive.lock.sleep.between.retries=60
SET javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver
SET datanucleus.plugin.pluginRegistryBundleCheck=LOG
SET hive.metastore.event.listeners=
SET javax.jdo.option.ConnectionPassword=mine
SET datanucleus.validateConstraints=false
SET hive.stats.retries.wait=3000
SET hive.zookeeper.session.timeout=1200000
SET hive.spark.client.server.connect.timeout=90000
SET datanucleus.autoCreateSchema=true
SET hive.server2.idle.session.timeout=604800000
SET hive.exec.scratchdir=file:/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/scratch--2657d609-1797-41c7-8fdd-83c54acdd7c8/
SET javax.jdo.option.Multithreaded=true
SET hive.metastore.client.socket.timeout=600
SET hive.server2.thrift.exponential.backoff.slot.length=100
SET hive.support.sql11.reserved.keywords=false
OK
OK
Loading data to table default.src
Table default.src stats: [numFiles=1, totalSize=5812]
OK
OK
OK
OK
RESET 
set hive.table.parameters.default=
set datanucleus.cache.collections=true
set datanucleus.cache.collections.lazy=true
set hive.metastore.partition.name.whitelist.pattern=.*
SET datanucleus.rdbms.datastoreAdapterClassName=org.datanucleus.store.rdbms.adapter.DerbyAdapter
SET datanucleus.identifierFactory=datanucleus1
SET hive.localize.resource.wait.interval=5000
SET javax.jdo.option.DetachAllOnCommit=true
SET hive.spark.client.connect.timeout=1000
SET hive.hmshandler.retry.interval=2000
SET javax.jdo.option.NonTransactionalRead=true
SET datanucleus.transactionIsolation=read-committed
SET datanucleus.cache.level2=false
SET datanucleus.validateColumns=false
SET hive.metastore.pre.event.listeners=
SET hive.spark.client.future.timeout=60
SET javax.jdo.PersistenceManagerFactoryClass=org.datanucleus.api.jdo.JDOPersistenceManagerFactory
SET hive.metastore.integral.jdo.pushdown=true
SET hive.metastore.warehouse.dir=file:/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/warehouse--d15f682e-e74f-41e7-8b21-54344e76e403/
SET hive.metastore.aggregate.stats.cache.ttl=600
SET hive.metastore.aggregate.stats.cache.max.writer.wait=5000
SET hive.metastore.uris=
SET datanucleus.validateTables=false
SET hive.server.read.socket.timeout=10
SET datanucleus.rdbms.useLegacyNativeValueStrategy=true
SET hive.server2.idle.operation.timeout=432000000
SET hive.stats.jdbc.timeout=30
SET hive.compactor.cleaner.run.interval=5000
SET hive.metastore.event.db.listener.timetolive=86400
SET hive.server2.async.exec.keepalive.time=10
SET hive.server2.long.polling.timeout=5000
SET hive.metastore.end.function.listeners=
SET datanucleus.connectionPoolingType=BONECP
SET hive.querylog.plan.progress.interval=60000
SET hive.server2.thrift.http.worker.keepalive.time=60
SET hive.compactor.check.interval=300
SET hive.txn.timeout=300
SET javax.jdo.option.ConnectionUserName=APP
SET hive.server2.thrift.login.timeout=20
SET hive.metastore.event.clean.freq=0
SET hive.metastore.client.connect.retry.delay=1
SET hive.server2.session.check.interval=21600000
SET datanucleus.storeManagerType=rdbms
SET hive.spark.job.monitor.timeout=60
SET hive.metastore.event.expiry.duration=0
SET hive.compactor.worker.timeout=86400
SET hive.zookeeper.connection.basesleeptime=1000
SET hive.metastore.client.socket.lifetime=0
SET hive.server2.async.exec.shutdown.timeout=10
SET hive.server2.thrift.worker.keepalive.time=60
SET datanucleus.autoStartMechanismMode=checked
SET hive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
SET hive.server2.thrift.http.cookie.max.age=86400
SET hive.metastore.aggregate.stats.cache.max.reader.wait=1000
SET datanucleus.cache.level2.type=none
SET datanucleus.fixedDatastore=false
SET javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/spark-d9ff8ef1-3205-417d-8168-f2a0818c55b7/metastore;create=true
SET hive.auto.progress.timeout=0
SET hive.server2.thrift.http.max.idle.time=1800000
SET hive.lock.sleep.between.retries=60
SET javax.jdo.option.ConnectionDriverName=org.apache.derby.jdbc.EmbeddedDriver
SET datanucleus.plugin.pluginRegistryBundleCheck=LOG
SET hive.metastore.event.listeners=
SET javax.jdo.option.ConnectionPassword=mine
SET datanucleus.validateConstraints=false
SET hive.stats.retries.wait=3000
SET hive.zookeeper.session.timeout=1200000
SET hive.spark.client.server.connect.timeout=90000
SET datanucleus.autoCreateSchema=true
SET hive.server2.idle.session.timeout=604800000
SET hive.exec.scratchdir=file:/home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/target/tmp/scratch--2657d609-1797-41c7-8fdd-83c54acdd7c8/
SET javax.jdo.option.Multithreaded=true
SET hive.metastore.client.socket.timeout=600
SET hive.server2.thrift.exponential.backoff.slot.length=100
SET hive.support.sql11.reserved.keywords=false
OK
OK
OK
OK
Loading data to table default.src
Table default.src stats: [numFiles=1, totalSize=5812]
OK
OK
OK
OK
OK
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.ClassNotFoundException org.apache.hadoop.hive.serde2.TestSerDe

======================
END HIVE FAILURE OUTPUT
======================
[info] - ADD JAR command (102 milliseconds)
[info] - ADD JAR command 2 (249 milliseconds)
[info] - CREATE TEMPORARY FUNCTION (128 milliseconds)
[info] - ADD FILE command (76 milliseconds)
[info] - dynamic_partition (1 second, 686 milliseconds)
[info] - Dynamic partition folder layout !!! IGNORED !!!
[info] - SPARK-5592: get java.net.URISyntaxException when dynamic partitioning (952 milliseconds)
[info] - Partition spec validation (286 milliseconds)
[info] - SPARK-3414 regression: should store analyzed logical plan when registering a temp table (67 milliseconds)
[info] - SPARK-3810: PreInsertionCasts static partitioning support (1 second, 472 milliseconds)
[info] - SPARK-3810: PreInsertionCasts dynamic partitioning support (53 seconds, 313 milliseconds)
[info] - parse HQL set commands (11 milliseconds)
[info] - SET commands semantics for a HiveContext (25 milliseconds)
12:37:40.442 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database a, returning NoSuchObjectException
12:37:41.371 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database b, returning NoSuchObjectException
[info] - current_database with multiple sessions (4 seconds, 74 milliseconds)
[info] - lookup hive UDF in another thread (53 milliseconds)
[info] - select from thrift based table (3 seconds, 146 milliseconds)
[info] HiveOperatorQueryableSuite:
[info] - SPARK-5324 query result of describe command (51 milliseconds)
[info] ScriptTransformationSuite:
[info] - cat without SerDe (63 milliseconds)
[info] - cat with LazySimpleSerDe (51 milliseconds)
12:37:48.962 ERROR org.apache.spark.executor.Executor: Exception in task 31.0 in stage 9502.0 (TID 102912)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:48.963 ERROR org.apache.spark.executor.Executor: Exception in task 10.0 in stage 9502.0 (TID 102891)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:48.962 ERROR org.apache.spark.executor.Executor: Exception in task 21.0 in stage 9502.0 (TID 102902)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:48.965 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 10.0 in stage 9502.0 (TID 102891, localhost): java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)

12:37:48.966 ERROR org.apache.spark.scheduler.TaskSetManager: Task 10 in stage 9502.0 failed 1 times; aborting job
12:37:48.968 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
12:37:48.968 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
12:37:48.968 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
[info] - script transformation should not swallow errors from upstream operators (no serde) (1 second, 75 milliseconds)
12:37:48.969 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:48.969 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:48.969 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.017 ERROR org.apache.spark.executor.Executor: Exception in task 21.0 in stage 9503.0 (TID 102934)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.018 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 21.0 in stage 9503.0 (TID 102934, localhost): java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)

Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.019 ERROR org.apache.spark.scheduler.TaskSetManager: Task 21 in stage 9503.0 failed 1 times; aborting job
12:37:50.019 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
12:37:50.019 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
[info] - script transformation should not swallow errors from upstream operators (with serde) (1 second, 54 milliseconds)
12:37:50.033 ERROR org.apache.spark.executor.Executor: Exception in task 10.0 in stage 9503.0 (TID 102923)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.034 ERROR org.apache.spark.executor.Executor: Exception in task 31.0 in stage 9503.0 (TID 102944)
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.037 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.037 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.038 ERROR org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread: 
Exception in thread "Thread-ScriptTransformation-Feed" java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
12:37:50.038 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-ScriptTransformation-Feed
java.lang.IllegalArgumentException: intentional exception
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:119)
	at org.apache.spark.sql.hive.execution.ExceptionInjectingOperator$$anonfun$doExecute$1.apply(ScriptTransformationSuite.scala:116)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply$mcV$sp(ScriptTransformation.scala:255)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread$$anonfun$run$1.apply(ScriptTransformation.scala:244)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
	at org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread.run(ScriptTransformation.scala:244)
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableAndQueryIt started
12:37:50.459 WARN org.apache.spark.sql.hive.HiveContext$$anon$2: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source relation `javaSavedTable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
12:37:50.594 WARN org.apache.spark.sql.hive.HiveContext$$anon$2: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source relation `externalTable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
12:37:51.590 WARN org.apache.spark.sql.hive.HiveContext$$anon$2: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source relation `javaSavedTable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveExternalTableWithSchemaAndQueryIt started
12:37:52.035 WARN org.apache.spark.sql.hive.HiveContext$$anon$2: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source relation `javaSavedTable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
12:37:52.218 WARN org.apache.spark.sql.hive.HiveContext$$anon$2: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source relation `externalTable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 3 total, 2.387s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.089s
[info] ScalaTest
[info] Run completed in 1 hour, 19 minutes, 17 seconds.
[info] Total number of tests run: 1734
[info] Suites: completed 54, aborted 0
[info] Tests: succeeded 1734, failed 0, canceled 0, ignored 594, pending 0
[info] All tests passed.
[info] Passed: Total 1739, Failed 0, Errors 0, Passed 1739, Ignored 594
[success] Total time: 5271 s, completed Jun 19, 2017 12:37:55 PM

========================================================================
Running PySpark tests
========================================================================
Running PySpark tests. Output is in /home/jenkins/workspace/spark-branch-1.6-test-sbt-hadoop-1.0/python/unit-tests.log
Will test against the following Python executables: ['python2.6', 'python3.4', 'pypy']
Will test the following Python modules: ['pyspark-core', 'pyspark-ml', 'pyspark-mllib', 'pyspark-sql', 'pyspark-streaming']
Finished test(python2.6): pyspark.broadcast (4s)
Finished test(python2.6): pyspark.conf (5s)
Finished test(python2.6): pyspark.accumulators (7s)
Finished test(python2.6): pyspark.serializers (8s)
Finished test(python2.6): pyspark.shuffle (0s)
Finished test(python2.6): pyspark.profiler (5s)
Finished test(python2.6): pyspark.context (21s)
Finished test(python2.6): pyspark.rdd (25s)
Finished test(python2.6): pyspark.ml.clustering (9s)
Finished test(python2.6): pyspark.ml.feature (19s)
Finished test(python2.6): pyspark.ml.classification (18s)
Finished test(python2.6): pyspark.ml.recommendation (12s)
Finished test(python2.6): pyspark.ml.tuning (12s)
Finished test(python2.6): pyspark.ml.regression (15s)
Finished test(python2.6): pyspark.ml.evaluation (9s)
Finished test(python2.6): pyspark.ml.tests (14s)
Finished test(python2.6): pyspark.mllib.classification (16s)
Finished test(python2.6): pyspark.mllib.evaluation (10s)
Finished test(python2.6): pyspark.mllib.fpm (6s)
Finished test(python2.6): pyspark.mllib.linalg.__init__ (0s)
Finished test(python2.6): pyspark.mllib.feature (11s)
Finished test(python2.6): pyspark.mllib.random (7s)
Finished test(python2.6): pyspark.mllib.clustering (28s)
Finished test(python2.6): pyspark.mllib.linalg.distributed (19s)
Finished test(python2.6): pyspark.mllib.regression (16s)
Finished test(python2.6): pyspark.mllib.stat.KernelDensity (0s)
Finished test(python2.6): pyspark.mllib.recommendation (19s)
Finished test(python2.6): pyspark.mllib.stat._statistics (10s)
Finished test(python2.6): pyspark.mllib.util (6s)
Finished test(python2.6): pyspark.sql.types (5s)
Finished test(python2.6): pyspark.mllib.tree (13s)
Finished test(python2.6): pyspark.sql.column (11s)
Finished test(python2.6): pyspark.sql.context (16s)
Finished test(python2.6): pyspark.tests (122s)
Finished test(python2.6): pyspark.sql.group (14s)
Finished test(python2.6): pyspark.sql.functions (21s)
Finished test(python2.6): pyspark.sql.dataframe (27s)
Finished test(python2.6): pyspark.sql.window (3s)
Finished test(python2.6): pyspark.streaming.util (0s)
Finished test(python2.6): pyspark.sql.readwriter (70s)
Finished test(python2.6): pyspark.mllib.tests (138s)
Finished test(python3.4): pyspark.rdd (32s)
Finished test(python3.4): pyspark.conf (4s)
Finished test(python3.4): pyspark.broadcast (5s)
Finished test(python3.4): pyspark.context (17s)
Finished test(python3.4): pyspark.accumulators (5s)
Finished test(python3.4): pyspark.serializers (8s)
Finished test(python3.4): pyspark.shuffle (2s)
Finished test(python3.4): pyspark.profiler (9s)
Finished test(python3.4): pyspark.ml.feature (24s)
Finished test(python2.6): pyspark.sql.tests (156s)
Finished test(python3.4): pyspark.ml.classification (23s)
Finished test(python3.4): pyspark.ml.clustering (11s)
Finished test(python3.4): pyspark.ml.recommendation (17s)
Finished test(python3.4): pyspark.ml.regression (21s)
Finished test(python3.4): pyspark.ml.tuning (18s)
Finished test(python3.4): pyspark.ml.tests (25s)
Finished test(python3.4): pyspark.ml.evaluation (13s)
Finished test(python3.4): pyspark.mllib.classification (19s)
Finished test(python3.4): pyspark.mllib.evaluation (12s)
Finished test(python3.4): pyspark.mllib.clustering (33s)
Finished test(python3.4): pyspark.mllib.feature (15s)
Finished test(python3.4): pyspark.mllib.fpm (10s)
Finished test(python3.4): pyspark.mllib.linalg.__init__ (0s)
Finished test(python3.4): pyspark.mllib.random (8s)
Finished test(python3.4): pyspark.mllib.linalg.distributed (18s)
Finished test(python3.4): pyspark.mllib.recommendation (19s)
Finished test(python3.4): pyspark.tests (174s)
Finished test(python3.4): pyspark.mllib.stat.KernelDensity (0s)
Finished test(python3.4): pyspark.mllib.regression (19s)
Finished test(python3.4): pyspark.mllib.stat._statistics (11s)
Finished test(python3.4): pyspark.mllib.tree (14s)
Finished test(python3.4): pyspark.mllib.util (7s)
Finished test(python3.4): pyspark.sql.types (5s)
Finished test(python3.4): pyspark.sql.context (13s)
Finished test(python3.4): pyspark.sql.column (11s)
Finished test(python3.4): pyspark.sql.group (21s)
Finished test(python3.4): pyspark.sql.dataframe (36s)
Finished test(python2.6): pyspark.streaming.tests (357s)
Finished test(python3.4): pyspark.sql.window (4s)
Finished test(python3.4): pyspark.sql.functions (20s)
Finished test(python3.4): pyspark.streaming.util (0s)
Finished test(python3.4): pyspark.sql.readwriter (51s)
Finished test(pypy): pyspark.rdd (29s)
Finished test(python3.4): pyspark.mllib.tests (142s)
Finished test(pypy): pyspark.conf (6s)
Finished test(pypy): pyspark.broadcast (5s)
Finished test(pypy): pyspark.context (19s)
Finished test(pypy): pyspark.accumulators (5s)
Finished test(pypy): pyspark.profiler (6s)
Finished test(pypy): pyspark.serializers (8s)
Finished test(pypy): pyspark.shuffle (1s)
Finished test(pypy): pyspark.sql.types (5s)
Finished test(python3.4): pyspark.sql.tests (122s)
Finished test(pypy): pyspark.sql.context (14s)
Finished test(pypy): pyspark.sql.column (13s)
Finished test(pypy): pyspark.sql.group (16s)
Finished test(pypy): pyspark.sql.dataframe (36s)
Finished test(pypy): pyspark.sql.functions (23s)
Finished test(pypy): pyspark.sql.window (3s)
Finished test(pypy): pyspark.tests (180s)
Finished test(pypy): pyspark.streaming.util (0s)
Finished test(python3.4): pyspark.streaming.tests (312s)
Finished test(pypy): pyspark.sql.readwriter (173s)
Finished test(pypy): pyspark.sql.tests (236s)
Finished test(pypy): pyspark.streaming.tests (367s)
Tests passed in 1177 seconds

========================================================================
Running SparkR tests
========================================================================
Loading required package: methods

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

    describe

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var

The following objects are masked from 'package:base':

    colnames, colnames<-, intersect, rank, rbind, sample, subset,
    summary, table, transform

SerDe functionality: ...................
functions on binary files: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....
binary functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...........
broadcast variables: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..
functions in client.R: .....
test functions in sparkR.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
..........
include an external JAR in SparkContext: ..
include R packages: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context

MLlib functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
............
parallelize() and collect(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............................
basic RDD functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
...........................................................................................................................................................................................................................................................................................................................................................................................................................................
partitionBy, groupByKey, reduceByKey etc.: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
....................
SparkSQL functions: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
........................................................................................................................................................SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................
tests RDD function take(): Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
................
the textFile() function: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.............
functions in utils.R: Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
.......................

DONE ===========================================================================
Tests passed.
Archiving artifacts
Recording test results
Finished: SUCCESS