Started by remote host 35.243.23.32
[EnvInject] - Loading node environment variables.
Building remotely on amp-jenkins-worker-05 (centos spark-test) in workspace /home/jenkins/workspace/NewSparkPullRequestBuilder
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
Cloning the remote Git repository
Cloning repository https://github.com/apache/spark.git
 > /home/jenkins/git2/bin/git init /home/jenkins/workspace/NewSparkPullRequestBuilder # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > /home/jenkins/git2/bin/git --version # timeout=10
 > /home/jenkins/git2/bin/git fetch --tags --progress https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/*
 > /home/jenkins/git2/bin/git config remote.origin.url https://github.com/apache/spark.git # timeout=10
 > /home/jenkins/git2/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /home/jenkins/git2/bin/git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > /home/jenkins/git2/bin/git fetch --tags --progress https://github.com/apache/spark.git +refs/pull/28526/*:refs/remotes/origin/pr/28526/*
 > /home/jenkins/git2/bin/git rev-parse refs/remotes/origin/pr/28526/merge^{commit} # timeout=10
 > /home/jenkins/git2/bin/git rev-parse refs/remotes/origin/origin/pr/28526/merge^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://wiki.jenkins-ci.org/display/JENKINS/Remove+Git+Plugin+BuildsByBranch+BuildData
Checking out Revision 9c97eb2eb863f8d2d2c2718c6e48058e8850c851 (refs/remotes/origin/pr/28526/merge)
 > /home/jenkins/git2/bin/git config core.sparsecheckout # timeout=10
 > /home/jenkins/git2/bin/git checkout -f 9c97eb2eb863f8d2d2c2718c6e48058e8850c851
 > /home/jenkins/git2/bin/git rev-list 9c97eb2eb863f8d2d2c2718c6e48058e8850c851 # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content
JENKINS_MASTER_HOSTNAME=amp-jenkins-master
JAVA_HOME=/usr/java/jdk1.8.0_191
JAVA_7_HOME=/usr/java/jdk1.7.0_79
SPARK_TESTING=1
LANG=en_US.UTF-8
[EnvInject] - Variables injected successfully.
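The notable part of the checkout above is the refspec: Jenkins builds GitHub's synthetic pull-request refs rather than a branch. A minimal sketch of reproducing that checkout by hand (PR 28526 and the URL are taken from this log; refs/pull/<id>/merge is GitHub's pre-built trial merge of the PR into its target branch and only exists while GitHub considers the PR mergeable):

    # Fetch the PR's synthetic refs into local remote-tracking refs.
    git init spark-pr && cd spark-pr
    git fetch https://github.com/apache/spark.git \
        '+refs/pull/28526/*:refs/remotes/origin/pr/28526/*'
    # pr/28526/head is the contributor's branch tip; pr/28526/merge is the
    # trial merge commit this build checks out (detached HEAD).
    git checkout -f refs/remotes/origin/pr/28526/merge
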
[NewSparkPullRequestBuilder] $ /bin/bash /tmp/hudson3003940112885362563.sh
+ export AMPLAB_JENKINS=1
+ AMPLAB_JENKINS=1
+ export PATH=/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ PATH=/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ export PATH=/usr/java/jdk1.8.0_191/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ PATH=/usr/java/jdk1.8.0_191/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ export PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:/usr/java/jdk1.8.0_191/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:/usr/java/jdk1.8.0_191/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin:/home/anaconda/envs/py3k/bin
+ export HOME=/home/sparkivy/per-executor-caches/2
+ HOME=/home/sparkivy/per-executor-caches/2
+ mkdir -p /home/sparkivy/per-executor-caches/2
+ export 'SBT_OPTS=-Duser.home=/home/sparkivy/per-executor-caches/2 -Dsbt.ivy.home=/home/sparkivy/per-executor-caches/2/.ivy2'
+ SBT_OPTS='-Duser.home=/home/sparkivy/per-executor-caches/2 -Dsbt.ivy.home=/home/sparkivy/per-executor-caches/2/.ivy2'
+ export SPARK_VERSIONS_SUITE_IVY_PATH=/home/sparkivy/per-executor-caches/2/.ivy2
+ SPARK_VERSIONS_SUITE_IVY_PATH=/home/sparkivy/per-executor-caches/2/.ivy2
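Overriding HOME and SBT_OPTS like this gives each Jenkins executor a private Ivy cache, so concurrent builds on the same worker cannot corrupt a shared ~/.ivy2. A minimal sketch of the same isolation outside Jenkins (the cache path here is hypothetical, and it assumes the sbt launcher script honors SBT_OPTS, as the stock one does):

    # Point both the JVM's user.home and sbt's Ivy home at a private directory.
    export HOME=/tmp/private-cache    # hypothetical location, one per executor
    mkdir -p "$HOME"
    export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
    ./build/sbt package               # artifacts now resolve into $HOME/.ivy2
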
+ ./dev/run-tests-jenkins
Attempting to post to Github...
 > Post successful.
HEAD is now at 9c97eb2... Merge 9f14144d191c8c41d8b3bd9585d78eb7ddae8407 into 435f12699a5d66b97053c916ff4030efee648522
HEAD is now at 9c97eb2... Merge 9f14144d191c8c41d8b3bd9585d78eb7ddae8407 into 435f12699a5d66b97053c916ff4030efee648522
+++ dirname /home/jenkins/workspace/NewSparkPullRequestBuilder/R/install-dev.sh
++ cd /home/jenkins/workspace/NewSparkPullRequestBuilder/R
++ pwd
+ FWDIR=/home/jenkins/workspace/NewSparkPullRequestBuilder/R
+ LIB_DIR=/home/jenkins/workspace/NewSparkPullRequestBuilder/R/lib
+ mkdir -p /home/jenkins/workspace/NewSparkPullRequestBuilder/R/lib
+ pushd /home/jenkins/workspace/NewSparkPullRequestBuilder/R
+ . /home/jenkins/workspace/NewSparkPullRequestBuilder/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/NewSparkPullRequestBuilder/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/NewSparkPullRequestBuilder/R/create-rd.sh
+++ cd /home/jenkins/workspace/NewSparkPullRequestBuilder/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/NewSparkPullRequestBuilder/R
++ pushd /home/jenkins/workspace/NewSparkPullRequestBuilder/R
++ . /home/jenkins/workspace/NewSparkPullRequestBuilder/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); setwd("/home/jenkins/workspace/NewSparkPullRequestBuilder/R"); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Updating SparkR documentation
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/NewSparkPullRequestBuilder/R/pkg/DESCRIPTION
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionAll.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing spark.prefixSpan.Rd
Writing spark.powerIterationClustering.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing PowerIterationClustering-class.Rd
Writing FPGrowthModel-class.Rd
Writing PrefixSpan-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
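All of the Rd files above come from the single devtools::document() call in create-rd.sh, which runs only the "rd" roclet so the package NAMESPACE is left untouched. A minimal sketch of regenerating them by hand, assuming the devtools and roxygen2 packages are installed and a Spark checkout at a hypothetical path:

    # Regenerate SparkR's Rd documentation from its roxygen comments.
    cd /path/to/spark/R    # hypothetical checkout location
    Rscript -e 'if ("devtools" %in% rownames(installed.packages())) {
      library(devtools)
      # Only the Rd roclet: writes pkg/man/*.Rd without rewriting NAMESPACE.
      devtools::document(pkg = "./pkg", roclets = c("rd"))
    }'
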
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/NewSparkPullRequestBuilder/R/lib /home/jenkins/workspace/NewSparkPullRequestBuilder/R/pkg/
* installing *source* package ‘SparkR’ ...
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
  converting help for package ‘SparkR’
    finding HTML links ... done
    AFTSurvivalRegressionModel-class   html
    ALSModel-class                     html
    BisectingKMeansModel-class         html
    DecisionTreeClassificationModel-class   html
    DecisionTreeRegressionModel-class  html
    FPGrowthModel-class                html
    GBTClassificationModel-class       html
    GBTRegressionModel-class           html
    GaussianMixtureModel-class         html
    GeneralizedLinearRegressionModel-class   html
    GroupedData                        html
    IsotonicRegressionModel-class      html
    KMeansModel-class                  html
    KSTest-class                       html
    LDAModel-class                     html
    LinearSVCModel-class               html
    LogisticRegressionModel-class      html
    MultilayerPerceptronClassificationModel-class   html
    NaiveBayesModel-class              html
    PowerIterationClustering-class     html
    PrefixSpan-class                   html
    RandomForestClassificationModel-class   html
    RandomForestRegressionModel-class  html
    SparkDataFrame                     html
    StreamingQuery                     html
    WindowSpec                         html
    alias                              html
    approxQuantile                     html
    arrange                            html
    as.data.frame                      html
    attach                             html
    avg                                html
    awaitTermination                   html
    between                            html
    broadcast                          html
    cache                              html
    cacheTable                         html
    cancelJobGroup                     html
    cast                               html
    checkpoint                         html
    clearCache                         html
    clearJobGroup                      html
    coalesce                           html
    collect                            html
    coltypes                           html
    column                             html
    column_aggregate_functions         html
    column_collection_functions        html
    column_datetime_diff_functions     html
    column_datetime_functions          html
    column_math_functions              html
    column_misc_functions              html
    column_nonaggregate_functions      html
    column_string_functions            html
    column_window_functions            html
    columnfunctions                    html
    columns                            html
    corr                               html
    count                              html
    cov                                html
    createDataFrame                    html
    createExternalTable-deprecated     html
    createOrReplaceTempView            html
    createTable                        html
    crossJoin                          html
    crosstab                           html
    cube                               html
    currentDatabase                    html
    dapply                             html
    dapplyCollect                      html
    describe                           html
    dim                                html
    distinct                           html
    drop                               html
    dropDuplicates                     html
    dropTempTable-deprecated           html
    dropTempView                       html
    dtypes                             html
    endsWith                           html
    eq_null_safe                       html
    except                             html
    exceptAll                          html
    explain                            html
    filter                             html
    first                              html
    fitted                             html
    freqItems                          html
    gapply                             html
    gapplyCollect                      html
    getLocalProperty                   html
    getNumPartitions                   html
    glm                                html
    groupBy                            html
    hashCode                           html
    head                               html
    hint                               html
    histogram                          html
    insertInto                         html
    install.spark                      html
    intersect                          html
    intersectAll                       html
    isActive                           html
    isLocal                            html
    isStreaming                        html
    join                               html
    last                               html
    lastProgress                       html
    limit                              html
    listColumns                        html
    listDatabases                      html
    listFunctions                      html
    listTables                         html
    localCheckpoint                    html
    match                              html
    merge                              html
    mutate                             html
    nafunctions                        html
    ncol                               html
    not                                html
    nrow                               html
    orderBy                            html
    otherwise                          html
    over                               html
    partitionBy                        html
    persist                            html
    pivot                              html
    predict                            html
    print.jobj                         html
    print.structField                  html
    print.structType                   html
    printSchema                        html
    queryName                          html
    randomSplit                        html
    rangeBetween                       html
    rbind                              html
    read.df                            html
    read.jdbc                          html
    read.json                          html
    read.ml                            html
    read.orc                           html
    read.parquet                       html
    read.stream                        html
    read.text                          html
    recoverPartitions                  html
    refreshByPath                      html
    refreshTable                       html
    registerTempTable-deprecated       html
    rename                             html
    repartition                        html
    repartitionByRange                 html
    rollup                             html
    rowsBetween                        html
    sample                             html
    sampleBy                           html
    saveAsTable                        html
    schema                             html
    select                             html
    selectExpr                         html
    setCheckpointDir                   html
    setCurrentDatabase                 html
    setJobDescription                  html
    setJobGroup                        html
    setLocalProperty                   html
    setLogLevel                        html
    show                               html
    showDF                             html
    spark.addFile                      html
    spark.als                          html
    spark.bisectingKmeans              html
    spark.decisionTree                 html
    spark.fpGrowth                     html
    spark.gaussianMixture              html
    spark.gbt                          html
    spark.getSparkFiles                html
    spark.getSparkFilesRootDirectory   html
    spark.glm                          html
    spark.isoreg                       html
    spark.kmeans                       html
    spark.kstest                       html
    spark.lapply                       html
    spark.lda                          html
    spark.logit                        html
    spark.mlp                          html
    spark.naiveBayes                   html
    spark.powerIterationClustering     html
    spark.prefixSpan                   html
    spark.randomForest                 html
    spark.survreg                      html
    spark.svmLinear                    html
    sparkR.callJMethod                 html
    sparkR.callJStatic                 html
    sparkR.conf                        html
    sparkR.init-deprecated             html
    sparkR.newJObject                  html
    sparkR.session                     html
    sparkR.session.stop                html
    sparkR.uiWebUrl                    html
    sparkR.version                     html
    sparkRHive.init-deprecated         html
    sparkRSQL.init-deprecated          html
    sql                                html
    startsWith                         html
    status                             html
    stopQuery                          html
    storageLevel                       html
    str                                html
    structField                        html
    structType                         html
    subset                             html
    substr                             html
    summarize                          html
    summary                            html
    tableNames                         html
    tableToDF                          html
    tables                             html
    take                               html
    toJSON                             html
    uncacheTable                       html
    union                              html
    unionAll                           html
    unionByName                        html
    unpersist                          html
    windowOrderBy                      html
    windowPartitionBy                  html
    with                               html
    withColumn                         html
    withWatermark                      html
    write.df                           html
    write.jdbc                         html
    write.json                         html
    write.ml                           html
    write.orc                          html
    write.parquet                      html
    write.stream                       html
    write.text                         html
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (SparkR)
+ cd /home/jenkins/workspace/NewSparkPullRequestBuilder/R/lib
+ jar cfM /home/jenkins/workspace/NewSparkPullRequestBuilder/R/lib/sparkr.zip SparkR
+ popd
[info] Using build tool sbt with Hadoop profile hadoop2.7 and Hive profile hive2.3 under environment amplab_jenkins
From https://github.com/apache/spark
 * [new branch]          branch-3.0 -> branch-3.0
[info] Found the following changed modules: root, hive, catalyst, sql, avro
[info] Setup the following environment variables for tests:

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.

========================================================================
Running Scala style checks
========================================================================
[info] Checking Scala style using SBT with these profiles: -Phadoop-2.7 -Phive-2.3 -Pkinesis-asl -Phive-thriftserver -Pmesos -Phadoop-cloud -Pyarn -Pkubernetes -Phive -Pspark-ganglia-lgpl
Scalastyle checks passed.

========================================================================
Running build tests
========================================================================
exec: curl -s -L https://downloads.lightbend.com/zinc/0.3.15/zinc-0.3.15.tgz
exec: curl -s -L https://downloads.lightbend.com/scala/2.12.10/scala-2.12.10.tgz
exec: curl -s -L https://archive.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
Using `mvn` from path: /home/jenkins/workspace/NewSparkPullRequestBuilder/build/apache-maven-3.6.3/bin/mvn
Using `mvn` from path: /home/jenkins/workspace/NewSparkPullRequestBuilder/build/apache-maven-3.6.3/bin/mvn
Performing Maven install for hadoop-2.7-hive-1.2
Falling back to archive.apache.org to download Maven
Using `mvn` from path: /home/jenkins/workspace/NewSparkPullRequestBuilder/build/apache-maven-3.6.3/bin/mvn
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-install-plugin:3.0.0-M1:install (default-cli) on project spark-parent_2.12: ArtifactInstallerException: Failed to install metadata org.apache.spark:spark-parent_2.12/maven-metadata.xml: Could not parse metadata /home/jenkins/.m2/repository/org/apache/spark/spark-parent_2.12/maven-metadata-local.xml: in epilog non whitespace content is not allowed but got > (position: END_TAG seen ...</metadata>\n>... @13:2) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Using `mvn` from path: /home/jenkins/workspace/NewSparkPullRequestBuilder/build/apache-maven-3.6.3/bin/mvn
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder/dev/test-dependencies.sh ; received return code 1
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
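Nothing in the PR itself failed here: the Maven install aborted because the worker's cached maven-metadata-local.xml for spark-parent_2.12 contains a stray '>' after the closing </metadata> tag, which the parser rejects as non-whitespace content in the epilog. A common remedy is to delete the corrupted file so Maven regenerates it on the next install; a minimal sketch, assuming shell access to the affected worker (the path is copied from the [ERROR] line above):

    # Remove the corrupted local metadata; Maven rewrites it on the next install.
    rm /home/jenkins/.m2/repository/org/apache/spark/spark-parent_2.12/maven-metadata-local.xml
    # Re-run the failed step to confirm the parse error is gone.
    ./dev/test-dependencies.sh
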