Console Output

Started by user sknapp
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
SPARK_BRANCH=branch-2.4
PATH=/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LANG=en_US.UTF-8
SPARK_TESTING=1
JAVA_HOME=/usr/java/latest
AMPLAB_JENKINS="true"

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on research-jenkins-worker-03 (ubuntu20 ubuntu) in workspace /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6
The recommended git tool is: NONE
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/spark.git
 > git init /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6 # timeout=10
Using reference repository: /home/jenkins/gitcaches/spark.reference
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse origin/branch-2.4^{commit} # timeout=10
Checking out Revision 163fbd2528a18bf062bddf7b7753631a12a369b5 (origin/branch-2.4)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 163fbd2528a18bf062bddf7b7753631a12a369b5 # timeout=10
Commit message: "Preparing Spark release v2.4.8-rc4"
 > git rev-list --no-walk 163fbd2528a18bf062bddf7b7753631a12a369b5 # timeout=10
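
The block above is the Jenkins git plugin initializing an empty repository, fetching every branch head from apache/spark (sped up by a local reference repository), and force-checking-out the resolved head of branch-2.4 as a detached commit. Roughly the same by hand (a sketch; the reference-repository step is a Jenkins-side optimization and is omitted):

    git init spark-branch-2.4 && cd spark-branch-2.4
    # fetch all branch heads into remote-tracking refs, as the plugin does
    git fetch --tags --force https://github.com/apache/spark.git \
        '+refs/heads/*:refs/remotes/origin/*'
    # detached checkout of the commit resolved for origin/branch-2.4
    # ("Preparing Spark release v2.4.8-rc4")
    git checkout -f 163fbd2528a18bf062bddf7b7753631a12a369b5
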
[spark-branch-2.4-test-sbt-hadoop-2.6] $ /bin/bash /tmp/jenkins1290208459815440259.sh
+++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
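
The `++` xtrace lines above come from sourcing R/find-r.sh. Reconstructed from that trace, the discovery logic is roughly the following (a sketch only; the second empty-string test reads like an $R_HOME override check, which is an assumption):

    # Sketch of find-r.sh as implied by the trace above.
    if [ -z "$R_SCRIPT_PATH" ]; then
      if [ ! -z "$R_HOME" ]; then            # assumed: honor an R_HOME override
        R_SCRIPT_PATH="$R_HOME/bin"
      else
        if [ ! "$(command -v R)" ]; then     # matches: '[' '!' /usr/bin/R ']'
          echo "Could not find R on PATH" >&2
          exit 1
        fi
        R_SCRIPT_PATH="$(dirname "$(which R)")"
      fi
      echo "Using R_SCRIPT_PATH = $R_SCRIPT_PATH"
    fi
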
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Loading required package: usethis
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/DESCRIPTION
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:592] @name May only use one @name per block
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:733] @name May only use one @name per block
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing FPGrowthModel-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
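
All of the `Writing *.Rd` lines above are roxygen2 regenerating SparkR's help pages, driven by the single Rscript call near the top of this block (create-rd.sh). To reproduce that step locally from the R/ directory, the same command reads (requires the devtools and roxygen2 packages; the guard skips the step when devtools is absent):

    Rscript -e 'if ("devtools" %in% rownames(installed.packages())) {
                  library(devtools)
                  devtools::document(pkg = "./pkg", roclets = c("rd"))
                }'
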
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib/sparkr.zip SparkR
+ popd
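
Stripped of the xtrace noise, the whole R/install-dev.sh run above boils down to four steps (a condensed sketch, with the $FWDIR workspace prefix abbreviated to R/):

    mkdir -p R/lib                           # LIB_DIR: private library target
    . R/find-r.sh                            # locate the R interpreter (above)
    . R/create-rd.sh                         # regenerate .Rd docs via roxygen2
    R CMD INSTALL --library=R/lib R/pkg/     # byte-compile and install SparkR
    (cd R/lib && jar cfM sparkr.zip SparkR)  # package the installed library as sparkr.zip
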
[info] Using build tool sbt with Hadoop profile hadoop2.6 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests: 

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.

========================================================================
Running Scala style checks
========================================================================
Scalastyle checks passed.

========================================================================
Running Python style checks
========================================================================
pycodestyle checks passed.
rm -rf _build/*
pydoc checks passed.
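
The Python gate here is pycodestyle plus a pydoc build (the stray `rm -rf _build/*` is that doc build cleaning its output directory). These gates correspond to scripts under dev/ in the Spark tree; dev/lint-r is confirmed by the error line at the end of this log, while the Python counterpart's name is an assumption:

    ./dev/lint-python   # assumed name; runs the pycodestyle/pydoc checks above
    ./dev/lint-r        # confirmed below: the step later aborted with code 143
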

========================================================================
Running R style checks
========================================================================

Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union


Attaching package: ‘testthat’

The following objects are masked from ‘package:SparkR’:

    describe, not

Build was aborted
Aborted by sknapp
Archiving artifacts
Terminated
[error] running /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/dev/lint-r ; received return code 143
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: ABORTED
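
The failure mode here is an abort, not a test failure: the build was cancelled ("Aborted by sknapp" above) while dev/lint-r was still running, so Jenkins sent the step SIGTERM. A shell reports a signal-killed child as 128 + signal number, so SIGTERM (15) yields the "received return code 143"; the JUnit publisher then fails only because no test stage ever ran to produce report files. The exit-code arithmetic is easy to verify:

    sleep 60 &
    kill -TERM $!   # terminate the child, as the Jenkins abort did
    wait $!
    echo $?         # prints 143 = 128 + 15 (SIGTERM)
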