Failed: Console Output

Skipping 22,355 KB of earlier console output.
ivy/per-executor-caches/8/.ivy2/cache/net.sourceforge.cssparser/cssparser/jars/cssparser-0.9.19.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.w3c.css/sac/jars/sac-1.3.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.eclipse.jetty.websocket/websocket-client/jars/websocket-client-9.2.17.v20160517.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.eclipse.jetty.websocket/websocket-common/jars/websocket-common-9.2.17.v20160517.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.eclipse.jetty.websocket/websocket-api/jars/websocket-api-9.2.17.v20160517.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.seleniumhq.selenium/selenium-firefox-driver/jars/selenium-firefox-driver-2.52.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.seleniumhq.selenium/selenium-ie-driver/jars/selenium-ie-driver-2.52.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.seleniumhq.selenium/selenium-safari-driver/jars/selenium-safari-driver-2.52.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.webbitserver/webbit/jars/webbit-0.4.14.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.seleniumhq.selenium/selenium-leg-rc/jars/selenium-leg-rc-2.52.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.hamcrest/hamcrest-library/jars/hamcrest-library-1.3.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.scalacheck/scalacheck_2.12/jars/scalacheck_2.12-1.14.2.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.apache.curator/curator-test/jars/curator-test-2.13.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.apache.hadoop/hadoop-minikdc/jars/hadoop-minikdc-3.2.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/com.h2database/h2/jars/h2-1.4.195.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.mariadb.jdbc/mariadb-java-client/jars/mariadb-java-client-2.5.4.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.postgresql/postgresql/bundles/postgresql-42.2.6.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/com.ibm.db2/jcc/jars/jcc-11.5.0.0.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.apache.parquet/parquet-avro/jars/parquet-avro-1.10.1.jar:/home/sparkivy/per-executor-caches/8/.ivy2/cache/it.unimi.dsi/fastutil/jars/fastutil-7.0.13.jar:/home/sparkivy/per-executor-caches/8/.sbt/boot/scala-2.10.7/org.scala-sbt/sbt/0.13.18/test-agent-0.13.18.jar:/home/sparkivy/per-executor-caches/8/.sbt/boot/scala-2.10.7/org.scala-sbt/sbt/0.13.18/test-interface-1.0.jar sbt.ForkMain 36881 failed with exit code 137
[info] ScalaTest
[info] Run completed in 4 hours, 49 seconds.
[info] Total number of tests run: 206
[info] Suites: completed 8, aborted 0
[info] Tests: succeeded 206, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 207, Failed 0, Errors 0, Passed 207
[info] - parameter accuracy (9 seconds, 679 milliseconds)
[info] - parameter convergence (3 seconds, 217 milliseconds)
[info] - predictions (340 milliseconds)
[info] - training and prediction (13 seconds, 612 milliseconds)
[info] - handling empty RDDs in a stream (840 milliseconds)
[info] BreezeVectorConversionSuite:
[info] - dense to breeze (1 millisecond)
[info] - sparse to breeze (0 milliseconds)
[info] - dense breeze to vector (1 millisecond)
[info] - sparse breeze to vector (0 milliseconds)
[info] - sparse breeze with partially-used arrays to vector (1 millisecond)
[info] MatrixUDTSuite:
[info] - preloaded MatrixUDT (4 milliseconds)
[info] KMeansPMMLModelExportSuite:
[info] - KMeansPMMLModelExport generate PMML format (1 millisecond)
[info] NGramSuite:
[info] - default behavior yields bigram features (777 milliseconds)
[info] - NGramLength=4 yields length 4 n-grams (395 milliseconds)
[info] - empty input yields empty output (414 milliseconds)
[info] - input array < n yields empty output (386 milliseconds)
[info] - read/write (438 milliseconds)
[info] PCASuite:
[info] - params (53 milliseconds)
[info] - pca (1 second, 160 milliseconds)
[info] - PCA read/write (392 milliseconds)
[info] - PCAModel read/write (1 second, 450 milliseconds)
[info] KMeansSuite:
[info] - default parameters (1 second, 276 milliseconds)
[info] - set parameters (0 milliseconds)
[info] - parameters validation (1 millisecond)
[info] - fit, transform and summary (765 milliseconds)
[info] - KMeansModel transform with non-default feature and prediction cols (380 milliseconds)
[info] - KMeans using cosine distance (771 milliseconds)
[info] - KMeans with cosine distance is not supported for 0-length vectors (145 milliseconds)
[info] - KMean with Array input (1 second, 217 milliseconds)
[info] - read/write (2 seconds, 226 milliseconds)
[info] - pmml export (4 seconds, 980 milliseconds)
[info] - prediction on single instance (399 milliseconds)
[info] - compare with weightCol and without weightCol (1 second, 50 milliseconds)
[info] - Two centers with weightCol (977 milliseconds)
[info] - Four centers with weightCol (9 seconds, 947 milliseconds)
[info] StreamingTestSuite:
[info] - accuracy for null hypothesis using welch t-test (2 seconds, 620 milliseconds)
[info] - accuracy for alternative hypothesis using welch t-test (285 milliseconds)
[info] - accuracy for null hypothesis using student t-test (348 milliseconds)
[info] - accuracy for alternative hypothesis using student t-test (299 milliseconds)
[info] - batches within same test window are grouped (445 milliseconds)
[info] - entries in peace period are dropped (288 milliseconds)
[info] - null hypothesis when only data from one group is present (289 milliseconds)
[info] ANOVATestSuite:
[info] - test DataFrame of labeled points (1 second, 159 milliseconds)
11:12:12.140 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:12:12.140 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
[info] - test DataFrame with sparse vector (597 milliseconds)
[info] RandomRDDsSuite:
[info] - RandomRDD sizes (104 milliseconds)
11:12:14.339 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
[info] - randomRDD for different distributions (3 seconds, 133 milliseconds)
[info] - randomVectorRDD for different distributions (2 seconds, 220 milliseconds)
[info] GaussianMixtureSuite:
[info] - gmm fails on high dimensional data (296 milliseconds)
[info] - single cluster (1 second, 310 milliseconds)
[info] - two clusters (268 milliseconds)
[info] - two clusters with distributed decompositions (350 milliseconds)
[info] - single cluster with sparse data (3 seconds, 553 milliseconds)
[info] - two clusters with sparse data (123 milliseconds)
[info] - model save / load (1 second, 105 milliseconds)
[info] - model prediction, parallel and local (143 milliseconds)
11:12:25.727 WARN org.apache.spark.sql.hive.client.Shim_v2_2: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3030)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2582)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:176)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2963)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2947)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2772)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2965)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2704)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy169.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4821)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy170.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1228)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy171.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2577)
	... 68 more
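The Shim_v2_2 warning above points at a configuration fix: setting hive.metastore.try.direct.sql to true so the metastore can evaluate partition filters as direct SQL instead of falling back to fetching all partition metadata. The sketch below shows one illustrative way to pass that property from a Spark application; the property name comes from the warning itself, while the spark.hadoop.* prefix and the SparkSession builder usage are assumptions, not something this log confirms.

// A minimal sketch, assuming the property can be forwarded through Spark's
// Hadoop configuration; adjust to your metastore setup (e.g. hive-site.xml).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("direct-sql-partition-filtering")
  .enableHiveSupport()
  // spark.hadoop.* entries are copied into the Hadoop conf the Hive client reads.
  .config("spark.hadoop.hive.metastore.try.direct.sql", "true")
  .getOrCreate()

// With direct SQL enabled, getPartitionsByFilter may push filters on
// non-string partition keys down to the metastore database instead of
// degrading to a full partition-metadata fetch, per the warning above.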
[info] MultiClassSummarizerSuite:
[info] - MultiClassSummarizer (1 millisecond)
[info] - MultiClassSummarizer with weighted samples (0 milliseconds)
[info] IterativelyReweightedLeastSquaresSuite:
[info] - IRLS against GLM with Binomial errors (490 milliseconds)
[info] - 2.2: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (2 minutes, 38 seconds)
[info] - IRLS against GLM with Poisson errors (223 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds<=>20170101 (304 milliseconds)
[info] - IRLS against L1Regression (329 milliseconds)
[info] PipelineSuite:
[info] - 2.2: getPartitionsByFilter: ds=20170101 (248 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (183 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk='aa' (205 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (112 milliseconds)
[info] - pipeline (629 milliseconds)
[info] - pipeline with duplicate stages (2 milliseconds)
[info] - Pipeline.copy (1 millisecond)
[info] - PipelineModel.copy (0 milliseconds)
[info] - pipeline model constructors (0 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (103 milliseconds)
[info] - 2.2: getPartitionsByFilter: 20170101=ds (130 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 and h=2 (226 milliseconds)
[info] - Pipeline read/write (525 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (83 milliseconds)
[info] - Pipeline read/write with non-Writable stage (16 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 or ds=20170102 (186 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (170 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (81 milliseconds)
[info] - PipelineModel read/write (516 milliseconds)
[info] - PipelineModel read/write: getStagePath (1 millisecond)
[info] - PipelineModel read/write with non-Writable stage (1 millisecond)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (113 milliseconds)
[info] - pipeline validateParams (22 milliseconds)
[info] - Pipeline.setStages should handle Java Arrays being non-covariant (1 millisecond)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (90 milliseconds)
[info] PrefixSpanSuite:
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (160 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (63 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (248 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (196 milliseconds)
[info] - PrefixSpan internal (integer seq, 0 delim) run, singleton itemsets (686 milliseconds)
[info] - PrefixSpan internal (integer seq, -1 delim) run, variable-size itemsets (65 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (245 milliseconds)
[info] - PrefixSpan projections with multiple partial starts (542 milliseconds)
[info] - PrefixSpan Integer type, variable-size itemsets (148 milliseconds)
[info] - PrefixSpan String type, variable-size itemsets (168 milliseconds)
[info] - PrefixSpan pre-processing's cleaning test (46 milliseconds)
[info] - model save/load (894 milliseconds)
[info] ProbabilisticClassifierSuite:
[info] - test thresholding (366 milliseconds)
[info] - test thresholding not required (0 milliseconds)
[info] - test tiebreak (0 milliseconds)
[info] - test one zero threshold (1 millisecond)
[info] - bad thresholds (1 millisecond)
[info] - normalizeToProbabilitiesInPlace (1 millisecond)
[info] Word2VecSuite:
[info] - Word2Vec (209 milliseconds)
[info] - Word2Vec throws exception when vocabulary is empty (35 milliseconds)
[info] - Word2VecModel (0 milliseconds)
[info] - findSynonyms doesn't reject similar word vectors when called with a vector (0 milliseconds)
[info] - model load / save (525 milliseconds)
[info] - big model load / save (338 milliseconds)
[info] - test similarity for word vectors with large values is not Infinity or NaN (2 milliseconds)
[info] FunctionsSuite:
[info] - test vector_to_array (376 milliseconds)
[info] StreamingKMeansSuite:
[info] - accuracy for single center and equivalence to grand average (1 second, 31 milliseconds)
[info] - accuracy for two centers (796 milliseconds)
[info] - detecting dying clusters (859 milliseconds)
[info] - SPARK-7946 setDecayFactor (0 milliseconds)
11:12:36.385 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] RFormulaSuite:
[info] - params (1 millisecond)
[info] - transform numeric data (426 milliseconds)
[info] - features column already exists (21 milliseconds)
[info] - label column already exists and forceIndexLabel was set with false (308 milliseconds)
[info] - label column already exists but forceIndexLabel was set with true (4 milliseconds)
[info] - label column already exists but is not numeric type (44 milliseconds)
[info] - allow missing label column for test datasets (274 milliseconds)
[info] - allow empty label (323 milliseconds)
[info] - encodes string terms (596 milliseconds)
[info] - encodes string terms with string indexer order type (1 second, 664 milliseconds)
[info] - test consistency with R when encoding string terms (351 milliseconds)
[info] - formula w/o intercept, we should output reference category when encoding string terms (1 second, 1 millisecond)
[info] - index string label (532 milliseconds)
[info] - force to index label even it is numeric type (573 milliseconds)
[info] - attribute generation (458 milliseconds)
[info] - vector attribute generation (340 milliseconds)
[info] - vector attribute generation with unnamed input attrs (264 milliseconds)
[info] - numeric interaction (356 milliseconds)
[info] - factor numeric interaction (393 milliseconds)
[info] - factor factor interaction (872 milliseconds)
[info] - read/write: RFormula (201 milliseconds)
[info] - read/write: RFormulaModel (4 seconds, 818 milliseconds)
[info] - should support all NumericType labels (154 milliseconds)
[info] - handle unseen features or labels (3 seconds, 109 milliseconds)
[info] - Use Vectors as inputs to formula. (515 milliseconds)
[info] - SPARK-23562 RFormula handleInvalid should handle invalid values in non-string columns. (708 milliseconds)
[info] ALSSuite:
[info] - rank-1 matrices (1 second, 365 milliseconds)
[info] - rank-1 matrices bulk (1 second, 458 milliseconds)
[info] - rank-2 matrices (1 second, 189 milliseconds)
[info] - rank-2 matrices bulk (10 seconds, 465 milliseconds)
[info] - rank-1 matrices implicit (3 seconds, 723 milliseconds)
[info] - rank-1 matrices implicit bulk (2 seconds, 101 milliseconds)
[info] - rank-2 matrices implicit (1 second, 788 milliseconds)
[info] - rank-2 matrices implicit bulk (9 seconds, 715 milliseconds)
[info] - rank-2 matrices implicit negative (1 second, 675 milliseconds)
[info] - rank-2 matrices with different user and product blocks (1 second, 333 milliseconds)
[info] - pseudorandomness (1 second, 57 milliseconds)
[info] - Storage Level for RDDs in model (919 milliseconds)
[info] - negative ids (3 seconds, 315 milliseconds)
11:13:38.703 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:13:38.703 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.24
11:13:38.715 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - NNALS, rank 2 (14 seconds, 340 milliseconds)
[info] - SPARK-18268: ALS with empty RDD should fail with better message (24 milliseconds)
[info] PrefixSpanSuite:
[info] - PrefixSpan projections with multiple partial starts (542 milliseconds)
[info] - PrefixSpan Integer type, variable-size itemsets (194 milliseconds)
[info] - PrefixSpan input row with nulls (225 milliseconds)
[info] - PrefixSpan String type, variable-size itemsets (227 milliseconds)
[info] NumericParserSuite:
[info] - parser (1 millisecond)
[info] - parser with whitespaces (1 millisecond)
[info] MLTestSuite:
[info] - 2.2: create client with sharesHadoopClasses = false (1 minute, 22 seconds)
[info] HivePartitionFilteringSuite(2.3):
[info] - test transformer on stream data (828 milliseconds)
[info] MaxAbsScalerSuite:
[info] - MaxAbsScaler fit basic case (640 milliseconds)
[info] - MaxAbsScaler read/write (224 milliseconds)
[info] - MaxAbsScalerModel read/write (997 milliseconds)
[info] HuberAggregatorSuite:
[info] - aggregator add method should check input size (16 milliseconds)
[info] - negative weight (17 milliseconds)
[info] - check sizes (29 milliseconds)
[info] - check correctness (47 milliseconds)
[info] - check with zero standard deviation (32 milliseconds)
[info] ElementwiseProductSuite:
[info] - streaming transform (319 milliseconds)
[info] - read/write (176 milliseconds)
[info] GaussianMixtureSuite:
[info] - gmm fails on high dimensional data (317 milliseconds)
[info] - default parameters (710 milliseconds)
[info] - set parameters (0 milliseconds)
[info] - parameters validation (1 millisecond)
[info] - fit, transform and summary (654 milliseconds)
[info] - read/write (1 second, 682 milliseconds)
[info] - univariate dense/sparse data with two clusters (4 seconds, 772 milliseconds)
[info] - multivariate data and check against R mvnormalmixEM (506 milliseconds)
[info] - upper triangular matrix unpacking (0 milliseconds)
[info] - GaussianMixture with Array input (1 second, 127 milliseconds)
[info] - GMM support instance weighting (15 seconds, 757 milliseconds)
[info] - prediction on single instance (468 milliseconds)
Build timed out (after 300 minutes). Marking the build as failed.
Build was aborted
Archiving artifacts
[info] - GMM on blocks *** FAILED *** (20 seconds, 757 milliseconds)
[info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.InterruptedException
[info] java.lang.InterruptedException
[info] 	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1367)
[info] 	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:248)
[info] 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:258)
[info] 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:263)
[info] 	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:294)
[info] 	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
[info] 	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:103)
[info] 	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:87)
[info] 	at org.apache.spark.storage.BlockManagerMaster.updateBlockInfo(BlockManagerMaster.scala:78)
[info] 	at org.apache.spark.storage.BlockManager.tryToReportBlockStatus(BlockManager.scala:744)
[info] 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$reportBlockStatus(BlockManager.scala:723)
[info] 	at org.apache.spark.storage.BlockManager$BlockStoreUpdater.$anonfun$save$1(BlockManager.scala:352)
[info] 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1302)
[info] 	at org.apache.spark.storage.BlockManager$BlockStoreUpdater.save(BlockManager.scala:316)
[info] 	at org.apache.spark.storage.BlockManager.putBytes(BlockManager.scala:1265)
[info] 	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$writeBlocks$1(TorrentBroadcast.scala:147)
[info] 	at org.apache.spark.broadcast.TorrentBroadcast.$anonfun$writeBlocks$1$adapted(TorrentBroadcast.scala:141)
[info] 	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
[info] 	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
[info] 	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
[info] 	at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:141)
[info] 	at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:91)
[info] 	at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:35)
[info] 	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:77)
[info] 	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1479)
[info] 	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1293)
[info] 	at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1166)
[info] 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$submitStage$5(DAGScheduler.scala:1169)
[info] 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$submitStage$5$adapted(DAGScheduler.scala:1168)
[info] 	at scala.collection.immutable.List.foreach(List.scala:392)
[info] 	at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1168)
[info] 	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:1109)
[info] 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2254)
[info] 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2246)
[info] 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2235)
[info] 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
[info]   at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2117)
[info]   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2066)
[info]   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2065)
[info]   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
[info]   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
[info]   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
[info]   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2065)
[info]   at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1303)
[info]   at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1166)
[info]   at org.apache.spark.scheduler.DAGScheduler.$anonfun$submitStage$5(DAGScheduler.scala:1169)
[info]   at org.apache.spark.scheduler.DAGScheduler.$anonfun$submitStage$5$adapted(DAGScheduler.scala:1168)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.apache.spark.scheduler.DAGScheduler.submitStage(DAGScheduler.scala:1168)
Recording test results
Finished: FAILURE