Console Output

Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
PATH=/home/anaconda/envs/py36/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
AMPLAB_JENKINS="true"
JAVA_HOME=/usr/java/latest
AMPLAB_JENKINS_BUILD_HIVE_PROFILE=hive2.3
SPARK_TESTING=1
AMPLAB_JENKINS_BUILD_PROFILE=hadoop3.2
LANG=en_US.UTF-8
SPARK_BRANCH=branch-3.0

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on research-jenkins-worker-06 (ubuntu20 ubuntu) in workspace /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3
The recommended git tool is: NONE
No credentials specified
 > git rev-parse --resolve-git-dir /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/.git # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/branch-3.0^{commit} # timeout=10
Checking out Revision 1709265af1589ffa9e44d050bfa913aa0fd27dea (origin/branch-3.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1709265af1589ffa9e44d050bfa913aa0fd27dea # timeout=10
Commit message: "[SPARK-23626][CORE] Eagerly compute RDD.partitions on entire DAG when submitting job to DAGScheduler"
 > git rev-list --no-walk 86bf5d345d7809e836bad2f6946253889eae7656 # timeout=10
[spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3] $ /bin/bash /tmp/jenkins8479098508624381798.sh
Removing R/lib/
Removing R/pkg/man/
Removing assembly/target/
Removing build/sbt-launch-0.13.18.jar
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/target/
Removing dev/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/sparktestsupport/__pycache__/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/target/
Removing external/docker-integration-tests/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10-token-provider/target/
Removing external/kafka-0-10/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing hadoop-cloud/target/
Removing launcher/target/
Removing lib/
Removing mllib-local/target/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/__pycache__/
Removing python/docs/__pycache__/
Removing python/docs/_build/
Removing python/pyspark/__pycache__/
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/tests/__pycache__/
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/tests/__pycache__/
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/avro/__pycache__/
Removing python/pyspark/sql/pandas/__pycache__/
Removing python/pyspark/sql/tests/__pycache__/
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/tests/__pycache__/
Removing python/pyspark/testing/__pycache__/
Removing python/pyspark/tests/__pycache__/
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/integration-tests/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing sql/__pycache__/
Removing sql/catalyst/target/
Removing sql/core/target/
Removing sql/hive-thriftserver/target/
Removing sql/hive/src/test/resources/__pycache__/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/target/
Removing target/
Removing tools/target/
+++ dirname /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/lib
+ pushd /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
+ . /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
++ pushd /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R
++ . /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); setwd("/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R"); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Loading required package: usethis
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/pkg/DESCRIPTION
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionAll.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing spark.prefixSpan.Rd
Writing spark.powerIterationClustering.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing PowerIterationClustering-class.Rd
Writing FPGrowthModel-class.Rd
Writing PrefixSpan-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/lib /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/R/lib/sparkr.zip SparkR
+ popd
[info] Using build tool sbt with Hadoop profile hadoop3.2 and Hive profile hive2.3 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests: 

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.

========================================================================
Running Scala style checks
========================================================================
[info] Checking Scala style using SBT with these profiles:  -Phadoop-3.2 -Phive-2.3 -Pkinesis-asl -Phive -Phive-thriftserver -Pkubernetes -Pyarn -Pspark-ganglia-lgpl -Phadoop-cloud -Pmesos
Scalastyle checks passed.

========================================================================
Running Python style checks
========================================================================
starting python compilation test...
python compilation succeeded.

starting pycodestyle test...
pycodestyle checks passed.

starting flake8 test...
flake8 checks passed.

starting sphinx-build tests...
sphinx-build checks failed:
sphinx-build -b html -d _build/doctrees  -a -W . _build/html
Running Sphinx v4.1.1
making output directory... done

Exception occurred:
  File "/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/python/docs/conf.py", line 338, in setup
    app.add_javascript('copybutton.js')
AttributeError: 'Sphinx' object has no attribute 'add_javascript'
The full traceback has been saved in /tmp/sphinx-err-7qdshymy.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [Makefile:55: html] Error 2

re-running make html to print full warning list:
sphinx-build -b html -d _build/doctrees  -a . _build/html
Running Sphinx v4.1.1
making output directory... done

Exception occurred:
  File "/home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/python/docs/conf.py", line 338, in setup
    app.add_javascript('copybutton.js')
AttributeError: 'Sphinx' object has no attribute 'add_javascript'
The full traceback has been saved in /tmp/sphinx-err-3xlrpoh2.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [Makefile:55: html] Error 2
[error] running /home/jenkins/workspace/spark-branch-3.0-test-sbt-hadoop-3.2-hive-2.3/dev/lint-python ; received return code 2
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
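
Root cause of the failure above: Sphinx 4.0 removed the long-deprecated Sphinx.add_javascript method (superseded by add_js_file in Sphinx 1.8), and the worker runs Sphinx 4.1.1, so the call at python/docs/conf.py line 338 raises AttributeError before a single page is built. A minimal, version-tolerant sketch of the relevant setup() hook in conf.py — assuming, as the traceback suggests, that it only needs to register copybutton.js (this is an illustrative guard, not necessarily the fix Spark actually shipped):

    def setup(app):
        # Sphinx 1.8 renamed add_javascript to add_js_file; Sphinx 4.0
        # dropped the old alias entirely, which is exactly the
        # AttributeError in the traceback above. Prefer the new API and
        # fall back only on very old Sphinx versions.
        if hasattr(app, "add_js_file"):
            app.add_js_file("copybutton.js")
        else:
            app.add_javascript("copybutton.js")

Alternatively, pinning Sphinx below 4.0 in the build environment restores the removed alias without touching conf.py.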