Failed
Console Output

[EnvInject] - Mask passwords passed as build parameters.
Started by user dongjoon
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
PATH=/home/anaconda/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/android-sdk/:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin
JAVA_HOME=/usr/java/jdk1.8.0_191
JAVA_7_HOME=/usr/java/jdk1.7.0_79
SPARK_BRANCH=branch-2.4
AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
AMPLAB_JENKINS="true"
SPARK_TESTING=1
LANG=en_US.UTF-8
SPARK_MASTER_SBT_HADOOP_2_7=1

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
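
[note] A minimal sketch for approximating the injected CI environment locally before invoking Spark's test driver, assuming dev/run-tests reads these variables the same way it does on Jenkins (values copied from the properties above):

    export SPARK_BRANCH=branch-2.4
    export AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
    export AMPLAB_JENKINS=true
    export SPARK_TESTING=1
    export SPARK_MASTER_SBT_HADOOP_2_7=1
    ./dev/run-tests   # Spark's test driver script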
Building remotely on amp-jenkins-worker-05 (centos spark-test) in workspace /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6
 > /home/jenkins/git2/bin/git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > /home/jenkins/git2/bin/git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > /home/jenkins/git2/bin/git --version # timeout=10
 > /home/jenkins/git2/bin/git fetch --tags --progress https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/*
 > /home/jenkins/git2/bin/git rev-parse origin/branch-2.4^{commit} # timeout=10
Checking out Revision d5b903e38556ee3e8e1eb8f71a08e232afa4e36a (origin/branch-2.4)
 > /home/jenkins/git2/bin/git config core.sparsecheckout # timeout=10
 > /home/jenkins/git2/bin/git checkout -f d5b903e38556ee3e8e1eb8f71a08e232afa4e36a
 > /home/jenkins/git2/bin/git rev-list d5b903e38556ee3e8e1eb8f71a08e232afa4e36a # timeout=10
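
[note] Reproducing this checkout outside Jenkins is a two-liner; the repository URL and commit SHA are taken verbatim from the log above:

    git clone https://github.com/apache/spark.git spark && cd spark
    # detached checkout of the exact revision under test
    git checkout -f d5b903e38556ee3e8e1eb8f71a08e232afa4e36a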
[spark-branch-2.4-test-sbt-hadoop-2.6] $ /bin/bash /tmp/hudson7150381371241550471.sh
Removing R/lib/
Removing R/pkg/man/
Removing assembly/target/
Removing build/apache-maven-3.5.4/
Removing build/sbt-launch-0.13.17.jar
Removing build/scala-2.11.12/
Removing build/zinc-0.3.15/
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/derby.log
Removing core/dummy/
Removing core/ignored/
Removing core/metastore_db/
Removing core/target/
Removing derby.log
Removing dev/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/lint-r-report.log
Removing dev/pr-deps/
Removing dev/pycodestyle-2.4.0.py
Removing dev/sparktestsupport/__init__.pyc
Removing dev/sparktestsupport/__pycache__/
Removing dev/sparktestsupport/modules.pyc
Removing dev/sparktestsupport/shellutils.pyc
Removing dev/sparktestsupport/toposort.pyc
Removing dev/target/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/spark-warehouse/
Removing external/avro/target/
Removing external/flume-assembly/target/
Removing external/flume-sink/target/
Removing external/flume/checkpoint/
Removing external/flume/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/spark-warehouse/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10/target/
Removing external/kafka-0-8-assembly/target/
Removing external/kafka-0-8/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/checkpoint/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing launcher/target/
Removing lib/
Removing logs/
Removing metastore_db/
Removing mllib-local/target/
Removing mllib/checkpoint/
Removing mllib/spark-warehouse/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/__pycache__/
Removing python/docs/__pycache__/
Removing python/docs/_build/
Removing python/docs/epytext.pyc
Removing python/lib/pyspark.zip
Removing python/pyspark/__init__.pyc
Removing python/pyspark/__pycache__/
Removing python/pyspark/_globals.pyc
Removing python/pyspark/accumulators.pyc
Removing python/pyspark/broadcast.pyc
Removing python/pyspark/cloudpickle.pyc
Removing python/pyspark/conf.pyc
Removing python/pyspark/context.pyc
Removing python/pyspark/files.pyc
Removing python/pyspark/find_spark_home.pyc
Removing python/pyspark/heapq3.pyc
Removing python/pyspark/java_gateway.pyc
Removing python/pyspark/join.pyc
Removing python/pyspark/ml/__init__.pyc
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/base.pyc
Removing python/pyspark/ml/classification.pyc
Removing python/pyspark/ml/clustering.pyc
Removing python/pyspark/ml/common.pyc
Removing python/pyspark/ml/evaluation.pyc
Removing python/pyspark/ml/feature.pyc
Removing python/pyspark/ml/fpm.pyc
Removing python/pyspark/ml/image.pyc
Removing python/pyspark/ml/linalg/__init__.pyc
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__init__.pyc
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/param/shared.pyc
Removing python/pyspark/ml/pipeline.pyc
Removing python/pyspark/ml/recommendation.pyc
Removing python/pyspark/ml/regression.pyc
Removing python/pyspark/ml/stat.pyc
Removing python/pyspark/ml/tuning.pyc
Removing python/pyspark/ml/util.pyc
Removing python/pyspark/ml/wrapper.pyc
Removing python/pyspark/mllib/__init__.pyc
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/classification.pyc
Removing python/pyspark/mllib/clustering.pyc
Removing python/pyspark/mllib/common.pyc
Removing python/pyspark/mllib/evaluation.pyc
Removing python/pyspark/mllib/feature.pyc
Removing python/pyspark/mllib/fpm.pyc
Removing python/pyspark/mllib/linalg/__init__.pyc
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/linalg/distributed.pyc
Removing python/pyspark/mllib/random.pyc
Removing python/pyspark/mllib/recommendation.pyc
Removing python/pyspark/mllib/regression.pyc
Removing python/pyspark/mllib/stat/KernelDensity.pyc
Removing python/pyspark/mllib/stat/__init__.pyc
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/stat/_statistics.pyc
Removing python/pyspark/mllib/stat/distribution.pyc
Removing python/pyspark/mllib/stat/test.pyc
Removing python/pyspark/mllib/tree.pyc
Removing python/pyspark/mllib/util.pyc
Removing python/pyspark/profiler.pyc
Removing python/pyspark/rdd.pyc
Removing python/pyspark/rddsampler.pyc
Removing python/pyspark/resultiterable.pyc
Removing python/pyspark/serializers.pyc
Removing python/pyspark/shuffle.pyc
Removing python/pyspark/sql/__init__.pyc
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/catalog.pyc
Removing python/pyspark/sql/column.pyc
Removing python/pyspark/sql/conf.pyc
Removing python/pyspark/sql/context.pyc
Removing python/pyspark/sql/dataframe.pyc
Removing python/pyspark/sql/functions.pyc
Removing python/pyspark/sql/group.pyc
Removing python/pyspark/sql/readwriter.pyc
Removing python/pyspark/sql/session.pyc
Removing python/pyspark/sql/streaming.pyc
Removing python/pyspark/sql/types.pyc
Removing python/pyspark/sql/udf.pyc
Removing python/pyspark/sql/utils.pyc
Removing python/pyspark/sql/window.pyc
Removing python/pyspark/statcounter.pyc
Removing python/pyspark/status.pyc
Removing python/pyspark/storagelevel.pyc
Removing python/pyspark/streaming/__init__.pyc
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/context.pyc
Removing python/pyspark/streaming/dstream.pyc
Removing python/pyspark/streaming/flume.pyc
Removing python/pyspark/streaming/kafka.pyc
Removing python/pyspark/streaming/kinesis.pyc
Removing python/pyspark/streaming/listener.pyc
Removing python/pyspark/streaming/util.pyc
Removing python/pyspark/taskcontext.pyc
Removing python/pyspark/traceback_utils.pyc
Removing python/pyspark/util.pyc
Removing python/pyspark/version.pyc
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/spark-warehouse/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing spark-warehouse/
Removing sql/__pycache__/
Removing sql/catalyst/loc/
Removing sql/catalyst/target/
Removing sql/core/loc/
Removing sql/core/paris/
Removing sql/core/spark-warehouse/
Removing sql/core/target/
Removing sql/hive-thriftserver/derby.log
Removing sql/hive-thriftserver/metastore_db/
Removing sql/hive-thriftserver/spark-warehouse/
Removing sql/hive-thriftserver/target/
Removing sql/hive/derby.log
Removing sql/hive/loc/
Removing sql/hive/metastore_db/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/checkpoint/
Removing streaming/target/
Removing target/
Removing tools/target/
Removing work/
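
[note] The "Removing ..." lines above are the characteristic output of git clean; a sketch of the cleanup step, assuming the job's shell script runs something equivalent to:

    # -f force, -d include untracked directories, -x also remove ignored files
    git clean -fdx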
+++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
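
[note] A sketch of the find-r.sh logic traced above: honor a preset R_HOME, otherwise derive the script directory from whichever R binary is first on PATH (variable handling simplified):

    if [ -z "$R_HOME" ]; then
      R_SCRIPT_PATH="$(dirname "$(which R)")"
    else
      R_SCRIPT_PATH="$R_HOME/bin"
    fi
    echo "Using R_SCRIPT_PATH = $R_SCRIPT_PATH"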
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Updating SparkR documentation
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/DESCRIPTION
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:592] @name May only use one @name per block
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:733] @name May only use one @name per block
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing FPGrowthModel-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
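
[note] All of the "Writing *.Rd" output above comes from the single Rscript call in create-rd.sh, which regenerates the SparkR help pages with roxygen2 only when devtools is available (command copied from the trace):

    Rscript -e 'if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'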
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/
* installing *source* package ‘SparkR’ ...
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
  converting help for package ‘SparkR’
    finding HTML links ... done
    AFTSurvivalRegressionModel-class        html  
    ALSModel-class                          html  
    BisectingKMeansModel-class              html  
    DecisionTreeClassificationModel-class   html  
    DecisionTreeRegressionModel-class       html  
    FPGrowthModel-class                     html  
    GBTClassificationModel-class            html  
    GBTRegressionModel-class                html  
    GaussianMixtureModel-class              html  
    GeneralizedLinearRegressionModel-class
                                            html  
    GroupedData                             html  
    IsotonicRegressionModel-class           html  
    KMeansModel-class                       html  
    KSTest-class                            html  
    LDAModel-class                          html  
    LinearSVCModel-class                    html  
    LogisticRegressionModel-class           html  
    MultilayerPerceptronClassificationModel-class
                                            html  
    NaiveBayesModel-class                   html  
    RandomForestClassificationModel-class   html  
    RandomForestRegressionModel-class       html  
    SparkDataFrame                          html  
    StreamingQuery                          html  
    WindowSpec                              html  
    alias                                   html  
    approxQuantile                          html  
    arrange                                 html  
    as.data.frame                           html  
    attach                                  html  
    avg                                     html  
    awaitTermination                        html  
    between                                 html  
    broadcast                               html  
    cache                                   html  
    cacheTable                              html  
    cancelJobGroup                          html  
    cast                                    html  
    checkpoint                              html  
    clearCache                              html  
    clearJobGroup                           html  
    coalesce                                html  
    collect                                 html  
    coltypes                                html  
    column                                  html  
    column_aggregate_functions              html  
    column_collection_functions             html  
    column_datetime_diff_functions          html  
    column_datetime_functions               html  
    column_math_functions                   html  
    column_misc_functions                   html  
    column_nonaggregate_functions           html  
    column_string_functions                 html  
    column_window_functions                 html  
    columnfunctions                         html  
    columns                                 html  
    corr                                    html  
    count                                   html  
    cov                                     html  
    createDataFrame                         html  
    createExternalTable-deprecated          html  
    createOrReplaceTempView                 html  
    createTable                             html  
    crossJoin                               html  
    crosstab                                html  
    cube                                    html  
    currentDatabase                         html  
    dapply                                  html  
    dapplyCollect                           html  
    describe                                html  
    dim                                     html  
    distinct                                html  
    drop                                    html  
    dropDuplicates                          html  
    dropTempTable-deprecated                html  
    dropTempView                            html  
    dtypes                                  html  
    endsWith                                html  
    eq_null_safe                            html  
    except                                  html  
    exceptAll                               html  
    explain                                 html  
    filter                                  html  
    first                                   html  
    fitted                                  html  
    freqItems                               html  
    gapply                                  html  
    gapplyCollect                           html  
    getLocalProperty                        html  
    getNumPartitions                        html  
    glm                                     html  
    groupBy                                 html  
    hashCode                                html  
    head                                    html  
    hint                                    html  
    histogram                               html  
    insertInto                              html  
    install.spark                           html  
    intersect                               html  
    intersectAll                            html  
    isActive                                html  
    isLocal                                 html  
    isStreaming                             html  
    join                                    html  
    last                                    html  
    lastProgress                            html  
    limit                                   html  
    listColumns                             html  
    listDatabases                           html  
    listFunctions                           html  
    listTables                              html  
    localCheckpoint                         html  
    match                                   html  
    merge                                   html  
    mutate                                  html  
    nafunctions                             html  
    ncol                                    html  
    not                                     html  
    nrow                                    html  
    orderBy                                 html  
    otherwise                               html  
    over                                    html  
    partitionBy                             html  
    persist                                 html  
    pivot                                   html  
    predict                                 html  
    print.jobj                              html  
    print.structField                       html  
    print.structType                        html  
    printSchema                             html  
    queryName                               html  
    randomSplit                             html  
    rangeBetween                            html  
    rbind                                   html  
    read.df                                 html  
    read.jdbc                               html  
    read.json                               html  
    read.ml                                 html  
    read.orc                                html  
    read.parquet                            html  
    read.stream                             html  
    read.text                               html  
    recoverPartitions                       html  
    refreshByPath                           html  
    refreshTable                            html  
    registerTempTable-deprecated            html  
    rename                                  html  
    repartition                             html  
    repartitionByRange                      html  
    rollup                                  html  
    rowsBetween                             html  
    sample                                  html  
    sampleBy                                html  
    saveAsTable                             html  
    schema                                  html  
    select                                  html  
    selectExpr                              html  
    setCheckpointDir                        html  
    setCurrentDatabase                      html  
    setJobDescription                       html  
    setJobGroup                             html  
    setLocalProperty                        html  
    setLogLevel                             html  
    show                                    html  
    showDF                                  html  
    spark.addFile                           html  
    spark.als                               html  
    spark.bisectingKmeans                   html  
    spark.decisionTree                      html  
    spark.fpGrowth                          html  
    spark.gaussianMixture                   html  
    spark.gbt                               html  
    spark.getSparkFiles                     html  
    spark.getSparkFilesRootDirectory        html  
    spark.glm                               html  
    spark.isoreg                            html  
    spark.kmeans                            html  
    spark.kstest                            html  
    spark.lapply                            html  
    spark.lda                               html  
    spark.logit                             html  
    spark.mlp                               html  
    spark.naiveBayes                        html  
    spark.randomForest                      html  
    spark.survreg                           html  
    spark.svmLinear                         html  
    sparkR.callJMethod                      html  
    sparkR.callJStatic                      html  
    sparkR.conf                             html  
    sparkR.init-deprecated                  html  
    sparkR.newJObject                       html  
    sparkR.session                          html  
    sparkR.session.stop                     html  
    sparkR.uiWebUrl                         html  
    sparkR.version                          html  
    sparkRHive.init-deprecated              html  
    sparkRSQL.init-deprecated               html  
    sql                                     html  
    startsWith                              html  
    status                                  html  
    stopQuery                               html  
    storageLevel                            html  
    str                                     html  
    structField                             html  
    structType                              html  
    subset                                  html  
    substr                                  html  
    summarize                               html  
    summary                                 html  
    tableNames                              html  
    tableToDF                               html  
    tables                                  html  
    take                                    html  
    toJSON                                  html  
    uncacheTable                            html  
    union                                   html  
    unionByName                             html  
    unpersist                               html  
    windowOrderBy                           html  
    windowPartitionBy                       html  
    with                                    html  
    withColumn                              html  
    withWatermark                           html  
    write.df                                html  
    write.jdbc                              html  
    write.json                              html  
    write.ml                                html  
    write.orc                               html  
    write.parquet                           html  
    write.stream                            html  
    write.text                              html  
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib/sparkr.zip SparkR
+ popd
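
[note] The last two commands package the freshly installed SparkR library for distribution; in "jar cfM", c creates an archive, f names the output file, and M skips the manifest:

    cd R/lib
    jar cfM sparkr.zip SparkR   # plain zip of the installed package, no META-INF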
[info] Using build tool sbt with Hadoop profile hadoop2.6 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests: 

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.
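
[note] "Attempting to fetch rat" suggests this phase is Spark's license checker, which downloads Apache RAT on first use; it can likely be run standalone (a sketch, assuming the dev/check-license script in the Spark repo):

    ./dev/check-license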

========================================================================
Running Scala style checks
========================================================================
Scalastyle checks passed.

========================================================================
Running Python style checks
========================================================================
pycodestyle checks passed.
rm -rf _build/*
pydoc checks passed.

========================================================================
Running R style checks
========================================================================

Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union


Attaching package: ‘testthat’

The following objects are masked from ‘package:SparkR’:

    describe, not

lintr checks passed.
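
[note] The three style phases above appear to map onto Spark's individual lint scripts, which can be run one at a time (a sketch; script names as found in the Spark repo):

    ./dev/lint-scala    # Scalastyle
    ./dev/lint-python   # pycodestyle + pydoc build
    ./dev/lint-r        # lintr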

========================================================================
Running build tests
========================================================================
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
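
[note] The repeated "Maven install/validate" and "Generating dependency manifest" lines per Hadoop profile resemble Spark's dependency audit; a sketch, assuming this phase is driven by dev/test-dependencies.sh:

    ./dev/test-dependencies.sh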

========================================================================
Building Spark
========================================================================
[info] Building Spark (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos test:package streaming-kafka-0-8-assembly/assembly streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
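
[note] The equivalent direct invocation of this build, with the profile flags and targets copied from the [info] line above (build/sbt bootstraps the pinned sbt version):

    ./build/sbt -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl \
      -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos \
      test:package streaming-kafka-0-8-assembly/assembly \
      streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly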
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Avro compiler using stringType=CharSequence
[info] Compiling Avro IDL /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/src/main/avro/sparkflume.avdl
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tags...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}spark...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-compiler/2.11.12/scala-compiler-2.11.12.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[info] 	[SUCCESSFUL ] org.scala-lang#scala-compiler;2.11.12!scala-compiler.jar (1488ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-library;2.11.12!scala-library.jar (1014ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-reflect;2.11.12!scala-reflect.jar (756ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/clapper/classutil_2.11/1.1.2/classutil_2.11-1.1.2.jar ...
[info] 	[SUCCESSFUL ] org.clapper#classutil_2.11;1.1.2!classutil_2.11.jar (370ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.5!scala-xml_2.11.jar(bundle) (418ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.4/scala-parser-combinators_2.11-1.0.4.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.4!scala-parser-combinators_2.11.jar(bundle) (405ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar ...
[info] 	[SUCCESSFUL ] org.ow2.asm#asm-util;5.1!asm-util.jar (328ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/clapper/grizzled-scala_2.11/4.2.0/grizzled-scala_2.11-4.2.0.jar ...
[info] 	[SUCCESSFUL ] org.clapper#grizzled-scala_2.11;4.2.0!grizzled-scala_2.11.jar (444ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalatest/scalatest_2.11/3.0.3/scalatest_2.11-3.0.3.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] 	[SUCCESSFUL ] org.scalatest#scalatest_2.11;3.0.3!scalatest_2.11.jar(bundle) (1000ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalactic/scalactic_2.11/3.0.3/scalactic_2.11-3.0.3.jar ...
[info] 	[SUCCESSFUL ] org.scalactic#scalactic_2.11;3.0.3!scalactic_2.11.jar(bundle) (508ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/typesafe/genjavadoc/genjavadoc-plugin_2.11.12/0.14/genjavadoc-plugin_2.11.12-0.14.jar ...
[info] 	[SUCCESSFUL ] com.typesafe.genjavadoc#genjavadoc-plugin_2.11.12;0.14!genjavadoc-plugin_2.11.12.jar (392ms)
[info] Done updating.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/classes...
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-common...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib-local...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}unsafe...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-sink...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kvstore...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}launcher...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sketch...
[info] Compiling 2 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/classes...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/core/jackson-databind/2.6.7.3/jackson-databind-2.6.7.3.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.core#jackson-databind;2.6.7.3!jackson-databind.jar(bundle) (527ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.7/slf4j-api-1.7.7.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.7!slf4j-api.jar (320ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/mockito/mockito-core/1.10.19/mockito-core-1.10.19.jar ...
[info] 	[SUCCESSFUL ] org.mockito#mockito-core;1.10.19!mockito-core.jar (462ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-shuffle...
[info] Compiling 78 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/classes...
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-sdk/1.6.0/flume-ng-sdk-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-sdk;1.6.0!flume-ng-sdk.jar (778ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-core/1.6.0/flume-ng-core-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-core;1.6.0!flume-ng-core.jar (744ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro-compiler/1.7.3/avro-compiler-1.7.3.jar ...
[info] 	[SUCCESSFUL ] org.apache.avro#avro-compiler;1.7.3!avro-compiler.jar (1296ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro-ipc/1.7.4/avro-ipc-1.7.4.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[info] 	[SUCCESSFUL ] org.apache.avro#avro-ipc;1.7.4!avro-ipc.jar (379ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/netty/netty/3.5.12.Final/netty-3.5.12.Final.jar ...
[info] 	[SUCCESSFUL ] io.netty#netty;3.5.12.Final!netty.jar(bundle) (560ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar ...
[info] 	[SUCCESSFUL ] org.codehaus.jackson#jackson-core-asl;1.8.8!jackson-core-asl.jar (440ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar ...
[info] 	[SUCCESSFUL ] org.codehaus.jackson#jackson-mapper-asl;1.8.8!jackson-mapper-asl.jar (511ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar ...
[info] 	[SUCCESSFUL ] commons-collections#commons-collections;3.2.1!commons-collections.jar (366ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-configuration/1.6.0/flume-ng-configuration-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-configuration;1.6.0!flume-ng-configuration.jar (388ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-auth/1.6.0/flume-ng-auth-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-auth;1.6.0!flume-ng-auth.jar (414ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/commons-codec/commons-codec/1.8/commons-codec-1.8.jar ...
[info] 	[SUCCESSFUL ] commons-codec#commons-codec;1.8!commons-codec.jar (380ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/joda-time/joda-time/2.1/joda-time-2.1.jar ...
[info] 	[SUCCESSFUL ] joda-time#joda-time;2.1!joda-time.jar (429ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/mortbay/jetty/servlet-api/2.5-20110124/servlet-api-2.5-20110124.jar ...
[info] 	[SUCCESSFUL ] org.mortbay.jetty#servlet-api;2.5-20110124!servlet-api.jar (372ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/google/code/gson/gson/2.2.2/gson-2.2.2.jar ...
[info] 	[SUCCESSFUL ] com.google.code.gson#gson;2.2.2!gson.jar (339ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/mina/mina-core/2.0.4/mina-core-2.0.4.jar ...
[info] 	[SUCCESSFUL ] org.apache.mina#mina-core;2.0.4!mina-core.jar(bundle) (327ms)
[info] Done updating.
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/twitter/chill_2.11/0.9.3/chill_2.11-0.9.3.jar ...
[info] 	[SUCCESSFUL ] com.twitter#chill_2.11;0.9.3!chill_2.11.jar (340ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalacheck/scalacheck_2.11/1.13.5/scalacheck_2.11-1.13.5.jar ...
[info] 	[SUCCESSFUL ] org.scalacheck#scalacheck_2.11;1.13.5!scalacheck_2.11.jar (558ms)
[info] Done updating.
[info] 'compiler-interface' not yet compiled for Scala 2.11.12. Compiling...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalanlp/breeze_2.11/0.13.2/breeze_2.11-0.13.2.jar ...
[info] 	[SUCCESSFUL ] org.scalanlp#breeze_2.11;0.13.2!breeze_2.11.jar (1196ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalanlp/breeze-macros_2.11/0.13.2/breeze-macros_2.11-0.13.2.jar ...
[info] 	[SUCCESSFUL ] org.scalanlp#breeze-macros_2.11;0.13.2!breeze-macros_2.11.jar (390ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spire-math/spire_2.11/0.13.0/spire_2.11-0.13.0.jar ...
[info] 	[SUCCESSFUL ] org.spire-math#spire_2.11;0.13.0!spire_2.11.jar (1211ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/chuusai/shapeless_2.11/2.3.2/shapeless_2.11-2.3.2.jar ...
[info] 	[SUCCESSFUL ] com.chuusai#shapeless_2.11;2.3.2!shapeless_2.11.jar(bundle) (744ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spire-math/spire-macros_2.11/0.13.0/spire-macros_2.11-0.13.0.jar ...
[info] 	[SUCCESSFUL ] org.spire-math#spire-macros_2.11;0.13.0!spire-macros_2.11.jar (447ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/typelevel/machinist_2.11/0.6.1/machinist_2.11-0.6.1.jar ...
[info] 	[SUCCESSFUL ] org.typelevel#machinist_2.11;0.6.1!machinist_2.11.jar (343ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/typelevel/macro-compat_2.11/1.1.1/macro-compat_2.11-1.1.1.jar ...
[info] 	[SUCCESSFUL ] org.typelevel#macro-compat_2.11;1.1.1!macro-compat_2.11.jar (316ms)
[info] Done updating.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-yarn...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}core...
[info] Compiling 24 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done updating.
[info] Compiling 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info]   Compilation completed in 13.323 s
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-plus/9.3.27.v20190418/jetty-plus-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-plus;9.3.27.v20190418!jetty-plus.jar (484ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-security/9.3.27.v20190418/jetty-security-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-security;9.3.27.v20190418!jetty-security.jar (393ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-util/9.3.27.v20190418/jetty-util-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-util;9.3.27.v20190418!jetty-util.jar (552ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-server/9.3.27.v20190418/jetty-server-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-server;9.3.27.v20190418!jetty-server.jar (603ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-http/9.3.27.v20190418/jetty-http-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-http;9.3.27.v20190418!jetty-http.jar (467ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-continuation/9.3.27.v20190418/jetty-continuation-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-continuation;9.3.27.v20190418!jetty-continuation.jar (412ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-servlet/9.3.27.v20190418/jetty-servlet-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-servlet;9.3.27.v20190418!jetty-servlet.jar (375ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-proxy/9.3.27.v20190418/jetty-proxy-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-proxy;9.3.27.v20190418!jetty-proxy.jar (382ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-client/9.3.27.v20190418/jetty-client-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-client;9.3.27.v20190418!jetty-client.jar (485ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-servlets/9.3.27.v20190418/jetty-servlets-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-servlets;9.3.27.v20190418!jetty-servlets.jar (438ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-jackson_2.11/3.5.3/json4s-jackson_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-jackson_2.11;3.5.3!json4s-jackson_2.11.jar (474ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/module/jackson-module-scala_2.11/2.6.7.1/jackson-module-scala_2.11-2.6.7.1.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.module#jackson-module-scala_2.11;2.6.7.1!jackson-module-scala_2.11.jar(bundle) (419ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-webapp/9.3.27.v20190418/jetty-webapp-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-webapp;9.3.27.v20190418!jetty-webapp.jar (440ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-jndi/9.3.27.v20190418/jetty-jndi-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-jndi;9.3.27.v20190418!jetty-jndi.jar (302ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-xml/9.3.27.v20190418/jetty-xml-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-xml;9.3.27.v20190418!jetty-xml.jar (413ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/jetty-io/9.3.27.v20190418/jetty-io-9.3.27.v20190418.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty#jetty-io;9.3.27.v20190418!jetty-io.jar (425ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-core_2.11/3.5.3/json4s-core_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-core_2.11;3.5.3!json4s-core_2.11.jar (364ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-ast_2.11/3.5.3/json4s-ast_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-ast_2.11;3.5.3!json4s-ast_2.11.jar (412ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-scalap_2.11/3.5.3/json4s-scalap_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-scalap_2.11;3.5.3!json4s-scalap_2.11.jar (555ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.6/scala-xml_2.11-1.0.6.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.6!scala-xml_2.11.jar(bundle) (419ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-java/2.52.0/selenium-java-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-java;2.52.0!selenium-java.jar (16ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-htmlunit-driver/2.52.0/selenium-htmlunit-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-htmlunit-driver;2.52.0!selenium-htmlunit-driver.jar (28ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/curator/curator-test/2.6.0/curator-test-2.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.curator#curator-test;2.6.0!curator-test.jar (308ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-chrome-driver/2.52.0/selenium-chrome-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-chrome-driver;2.52.0!selenium-chrome-driver.jar (18ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-edge-driver/2.52.0/selenium-edge-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-edge-driver;2.52.0!selenium-edge-driver.jar (17ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-firefox-driver/2.52.0/selenium-firefox-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-firefox-driver;2.52.0!selenium-firefox-driver.jar (47ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-ie-driver/2.52.0/selenium-ie-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-ie-driver;2.52.0!selenium-ie-driver.jar (18ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-safari-driver/2.52.0/selenium-safari-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-safari-driver;2.52.0!selenium-safari-driver.jar (20ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-support/2.52.0/selenium-support-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-support;2.52.0!selenium-support.jar (22ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/webbitserver/webbit/0.4.14/webbit-0.4.14.jar ...
[info] 	[SUCCESSFUL ] org.webbitserver#webbit;0.4.14!webbit.jar (29ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-leg-rc/2.52.0/selenium-leg-rc-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-leg-rc;2.52.0!selenium-leg-rc.jar (31ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-remote-driver/2.52.0/selenium-remote-driver-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-remote-driver;2.52.0!selenium-remote-driver.jar (27ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/cglib/cglib-nodep/2.1_3/cglib-nodep-2.1_3.jar ...
[info] 	[SUCCESSFUL ] cglib#cglib-nodep;2.1_3!cglib-nodep.jar (32ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/google/code/gson/gson/2.3.1/gson-2.3.1.jar ...
[info] 	[SUCCESSFUL ] com.google.code.gson#gson;2.3.1!gson.jar (23ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/seleniumhq/selenium/selenium-api/2.52.0/selenium-api-2.52.0.jar ...
[info] 	[SUCCESSFUL ] org.seleniumhq.selenium#selenium-api;2.52.0!selenium-api.jar (23ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpclient/4.5.1/httpclient-4.5.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.5.1!httpclient.jar (336ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/java/dev/jna/jna/4.1.0/jna-4.1.0.jar ...
[info] 	[SUCCESSFUL ] net.java.dev.jna#jna;4.1.0!jna.jar (59ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/java/dev/jna/jna-platform/4.1.0/jna-platform-4.1.0.jar ...
[info] 	[SUCCESSFUL ] net.java.dev.jna#jna-platform;4.1.0!jna-platform.jar (95ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpcore/4.4.3/httpcore-4.4.3.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.4.3!httpcore.jar (364ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/htmlunit/htmlunit/2.18/htmlunit-2.18.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.htmlunit#htmlunit;2.18!htmlunit.jar (559ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpmime/4.5/httpmime-4.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpmime;4.5!httpmime.jar (486ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/htmlunit/htmlunit-core-js/2.17/htmlunit-core-js-2.17.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.htmlunit#htmlunit-core-js;2.17!htmlunit-core-js.jar (507ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/xerces/xercesImpl/2.11.0/xercesImpl-2.11.0.jar ...
[info] 	[SUCCESSFUL ] xerces#xercesImpl;2.11.0!xercesImpl.jar (505ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/nekohtml/nekohtml/1.9.22/nekohtml-1.9.22.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.nekohtml#nekohtml;1.9.22!nekohtml.jar (390ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/cssparser/cssparser/0.9.16/cssparser-0.9.16.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.cssparser#cssparser;0.9.16!cssparser.jar (392ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-client/9.2.12.v20150709/websocket-client-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-client;9.2.12.v20150709!websocket-client.jar (443ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/w3c/css/sac/1.3/sac-1.3.jar ...
[info] 	[SUCCESSFUL ] org.w3c.css#sac;1.3!sac.jar (17ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-common/9.2.12.v20150709/websocket-common-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-common;9.2.12.v20150709!websocket-common.jar (352ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-api/9.2.12.v20150709/websocket-api-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-api;9.2.12.v20150709!websocket-api.jar (384ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-avatica/1.2.0-incubating/calcite-avatica-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-avatica;1.2.0-incubating!calcite-avatica.jar (407ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpclient/4.4.1/httpclient-4.4.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.4.1!httpclient.jar (369ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
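
Note on the eviction report above: sbt/Ivy resolved the competing io.netty:netty requests by keeping 3.9.9.Final (requested by spark-core) and evicting the 3.6.2.Final and 3.7.0.Final versions wanted by zookeeper and hadoop-hdfs; it flags the result because netty 3.x releases are not guaranteed binary compatible. If the implicit choice ever matters, a build can make it explicit. A minimal build.sbt sketch in the sbt 0.13 style this build uses (illustrative only, not Spark's actual build definition):

    // Pin the contested artifact so the eviction is deliberate rather than implicit.
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"

Running `sbt evicted`, as the warning suggests, prints the same report with the full dependency chains.
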
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kubernetes...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}yarn...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mesos...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}catalyst...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}ganglia-lgpl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}graphx...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/classes...
[info] Compiling 16 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/classes...
[info] Compiling 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/test-classes...
[info] Compiling 20 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/classes...
[info] Compiling 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/classes...
[info] Compiling 12 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Compiling 21 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/test-classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:22:  Unsafe is internal proprietary API and may be removed in a future release
[warn] import sun.misc.Unsafe;
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:28:  Unsafe is internal proprietary API and may be removed in a future release
[warn]   private static final Unsafe _UNSAFE;
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:150:  Unsafe is internal proprietary API and may be removed in a future release
[warn]     sun.misc.Unsafe unsafe;
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:152:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:154:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       unsafe = (sun.misc.Unsafe) unsafeField.get(null);
[warn]                         ^
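
These javac lint messages fire because sun.misc.Unsafe is a JDK-internal class with no public constructor; the quoted Platform.java lines obtain the singleton reflectively, the conventional workaround on JDK 8. A Scala equivalent of the quoted Java, as a minimal sketch ("theUnsafe" is the real JDK 8 private static field name):

    import java.lang.reflect.Field
    import sun.misc.Unsafe

    // Reflectively fetch the private static singleton, as Platform.java does above.
    val unsafe: Unsafe = {
      val field: Field = classOf[Unsafe].getDeclaredField("theUnsafe")
      field.setAccessible(true)
      field.get(null).asInstanceOf[Unsafe]
    }

The warning itself cannot be fixed from user code; it only signals that the API may be withdrawn in a later JDK.
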
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 7 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/test-classes...
[info] Compiling 494 Scala sources and 81 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:61:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers.next()).thenThrow(RuntimeException.class);
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:68:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers2.next()).thenReturn(mockManagedBuffer).thenThrow(RuntimeException.class);
[warn]                                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/src_managed/main/compiled_avro/org/apache/spark/streaming/flume/sink/EventBatch.java:243:  [unchecked] unchecked cast
[warn]         record.events = fieldSetFlags()[2] ? this.events : (java.util.List<org.apache.spark.streaming.flume.sink.SparkSinkEvent>) defaultValue(fields()[2]);
[warn]                                                                                                                                               ^
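
The two Mockito warnings come from the thenThrow(Class...) overload: a varargs parameter of type Class<? extends Throwable>[] forces an unchecked generic array creation at every call site. Passing an exception instance instead avoids the generic varargs entirely; a minimal Scala sketch (the iterator mock is a stand-in for the suite's buffer iterator, not the actual test code):

    import org.mockito.Mockito

    val buffers: java.util.Iterator[_] = Mockito.mock(classOf[java.util.Iterator[_]])
    // thenThrow(Throwable*) takes instances, so no generic Class[...] array is created.
    Mockito.when(buffers.next()).thenThrow(new RuntimeException("boom"))

The third warning is in Avro-generated code (EventBatch.java under src_managed), which is not normally edited by hand.
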
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[info] Compiling 13 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/dropwizard/metrics/metrics-ganglia/3.1.5/metrics-ganglia-3.1.5.jar ...
[info] 	[SUCCESSFUL ] io.dropwizard.metrics#metrics-ganglia;3.1.5!metrics-ganglia.jar(bundle) (406ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/info/ganglia/gmetric4j/gmetric4j/1.0.7/gmetric4j-1.0.7.jar ...
[info] 	[SUCCESSFUL ] info.ganglia.gmetric4j#gmetric4j;1.0.7!gmetric4j.jar (405ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/acplt/oncrpc/1.0.7/oncrpc-1.0.7.jar ...
[info] 	[SUCCESSFUL ] org.acplt#oncrpc;1.0.7!oncrpc.jar (394ms)
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.1.0!scala-parser-combinators_2.11.jar(bundle) (369ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/codehaus/janino/janino/3.0.16/janino-3.0.16.jar ...
[info] 	[SUCCESSFUL ] org.codehaus.janino#janino;3.0.16!janino.jar (51ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/codehaus/janino/commons-compiler/3.0.16/commons-compiler-3.0.16.jar ...
[info] 	[SUCCESSFUL ] org.codehaus.janino#commons-compiler;3.0.16!commons-compiler.jar (18ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/antlr/antlr4/4.7/antlr4-4.7.jar ...
[info] 	[SUCCESSFUL ] org.antlr#antlr4;4.7!antlr4.jar (493ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql...
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-core/1.5.5/orc-core-1.5.5-nohive.jar ...
[info] 	[SUCCESSFUL ] org.apache.orc#orc-core;1.5.5!orc-core.jar (613ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-mapreduce/1.5.5/orc-mapreduce-1.5.5-nohive.jar ...
[info] 	[SUCCESSFUL ] org.apache.orc#orc-mapreduce;1.5.5!orc-mapreduce.jar (581ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-shims/1.5.5/orc-shims-1.5.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.orc#orc-shims;1.5.5!orc-shims.jar (374ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar ...
[info] 	[SUCCESSFUL ] mysql#mysql-connector-java;5.1.38!mysql-connector-java.jar (55ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/postgresql/postgresql/9.4.1207.jre7/postgresql-9.4.1207.jre7.jar ...
[info] 	[SUCCESSFUL ] org.postgresql#postgresql;9.4.1207.jre7!postgresql.jar(bundle) (396ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql-kafka-0-10...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}avro...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]                             ^
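
Each of these deprecations names its replacement in Spark's own API (attemptNumber for attemptId, AccumulatorV2 for AccumulableParam, and so on). For the AccumulableParam case, a minimal AccumulatorV2 sketch against the Spark 2.4 API being compiled here (a plain long counter, illustrative rather than taken from the source above):

    import org.apache.spark.util.AccumulatorV2

    // A long counter implemented on AccumulatorV2, the replacement the warning names.
    class CountAccumulator extends AccumulatorV2[Long, Long] {
      private var count = 0L
      override def isZero: Boolean = count == 0L
      override def copy(): AccumulatorV2[Long, Long] = {
        val acc = new CountAccumulator
        acc.count = count
        acc
      }
      override def reset(): Unit = count = 0L
      override def add(v: Long): Unit = count += v
      override def merge(other: AccumulatorV2[Long, Long]): Unit = count += other.value
      override def value: Long = count
    }

An instance would be registered with SparkContext.register before use on executors.
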
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.6.7/jackson-dataformat-cbor-2.6.7.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.6.7!jackson-dataformat-cbor.jar(bundle) (397ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl-assembly...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka_2.11/2.0.0/kafka_2.11-2.0.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka_2.11;2.0.0!kafka_2.11.jar (869ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/typesafe/scala-logging/scala-logging_2.11/3.9.0/scala-logging_2.11-3.9.0.jar ...
[info] 	[SUCCESSFUL ] com.typesafe.scala-logging#scala-logging_2.11;3.9.0!scala-logging_2.11.jar(bundle) (403ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/101tec/zkclient/0.10/zkclient-0.10.jar ...
[info] 	[SUCCESSFUL ] com.101tec#zkclient;0.10!zkclient.jar (332ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10-assembly...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-assembly...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka_2.11/0.8.2.1/kafka_2.11-0.8.2.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka_2.11;0.8.2.1!kafka_2.11.jar (687ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.2/scala-parser-combinators_2.11-1.0.2.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2!scala-parser-combinators_2.11.jar(bundle) (458ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/101tec/zkclient/0.3/zkclient-0.3.jar ...
[info] 	[SUCCESSFUL ] com.101tec#zkclient;0.3!zkclient.jar (361ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka-clients/0.8.2.1/kafka-clients-0.8.2.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka-clients;0.8.2.1!kafka-clients.jar (419ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/jpountz/lz4/lz4/1.2.0/lz4-1.2.0.jar ...
[info] 	[SUCCESSFUL ] net.jpountz.lz4#lz4;1.2.0!lz4.jar (1848ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar ...
[info] 	[SUCCESSFUL ] net.sf.jopt-simple#jopt-simple;3.2!jopt-simple.jar (466ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8-assembly...
[warn] four warnings found
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-web-proxy/2.6.5/hadoop-yarn-server-web-proxy-2.6.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-web-proxy;2.6.5!hadoop-yarn-server-web-proxy.jar (415ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-tests/2.6.5/hadoop-yarn-server-tests-2.6.5-tests.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-tests;2.6.5!hadoop-yarn-server-tests.jar (504ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-resourcemanager/2.6.5/hadoop-yarn-server-resourcemanager-2.6.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-resourcemanager;2.6.5!hadoop-yarn-server-resourcemanager.jar (621ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-applicationhistoryservice/2.6.5/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar ...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
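
sbt emits this because the module contains more than one object with a main method, so neither `run` nor the packaged manifest can pick an entry point automatically. Selecting one explicitly silences it; a build.sbt sketch in 0.13 syntax with an illustrative class name (Spark's launch scripts choose main classes themselves, so its build leaves this unset):

    // Illustrative entry point; use a class that `show discoveredMainClasses` lists.
    mainClass in Compile := Some("org.apache.spark.examples.SparkPi")

Running `show discoveredMainClasses`, as the warning suggests, lists the candidates.
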
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-applicationhistoryservice;2.6.5!hadoop-yarn-server-applicationhistoryservice.jar (420ms)
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/classes...
[info] Compiling 38 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/classes...
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/classes...
[info] Compiling 103 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/classes...
[info] Compiling 26 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/classes...
[info] Compiling 240 Scala sources and 31 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/classes...
[info] Compiling 240 Scala sources and 26 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn]                                    ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 6 warnings found
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-client/4.6.1/kubernetes-client-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-client;4.6.1!kubernetes-client.jar (544ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/dataformat/jackson-dataformat-yaml/2.6.7/jackson-dataformat-yaml-2.6.7.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-yaml;2.6.7!jackson-dataformat-yaml.jar(bundle) (419ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-model/4.6.1/kubernetes-model-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-model;4.6.1!kubernetes-model.jar(bundle) (934ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/squareup/okhttp3/okhttp/3.12.0/okhttp-3.12.0.jar ...
[info] 	[SUCCESSFUL ] com.squareup.okhttp3#okhttp;3.12.0!okhttp.jar (439ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/squareup/okhttp3/logging-interceptor/3.12.0/logging-interceptor-3.12.0.jar ...
[info] 	[SUCCESSFUL ] com.squareup.okhttp3#logging-interceptor;3.12.0!logging-interceptor.jar (353ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.26/slf4j-api-1.7.26.jar ...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.7-SNAPSHOT.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.26!slf4j-api.jar (288ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/jul-to-slf4j/1.7.26/jul-to-slf4j-1.7.26.jar ...
[info] Done packaging.
[info] Compiling 11 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/classes...
[info] Compiling 10 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/classes...
[info] Compiling 11 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/classes...
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/classes...
[info] 	[SUCCESSFUL ] org.slf4j#jul-to-slf4j;1.7.26!jul-to-slf4j.jar (427ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-model-common/4.6.1/kubernetes-model-common-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-model-common;4.6.1!kubernetes-model-common.jar (1487ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/module/jackson-module-jaxb-annotations/2.9.9/jackson-module-jaxb-annotations-2.9.9.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.module#jackson-module-jaxb-annotations;2.9.9!jackson-module-jaxb-annotations.jar(bundle) (338ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/yaml/snakeyaml/1.15/snakeyaml-1.15.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn]                      ^
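
poll(long) was deprecated in Kafka 2.0 (the kafka-clients version resolved earlier in this log) in favor of poll(java.time.Duration). A minimal Scala sketch of the replacement call, with placeholder broker and group settings:

    import java.time.Duration
    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker
    props.put("group.id", "example-group")           // placeholder group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    // Duration.ZERO is the closest analogue of the deprecated poll(0).
    val records = consumer.poll(Duration.ZERO)

One caveat: unlike poll(0), poll(Duration.ZERO) does not block for metadata fetches, which is part of why this migration was not purely mechanical in the connector code flagged above.
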
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn]            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn]        ^
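
The KinesisUtils.createStream warnings above name their replacement directly: KinesisInputDStream.builder. A minimal sketch of the builder form, assuming an existing StreamingContext `ssc`; the stream, endpoint, region, and checkpoint-app names are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.KinesisInputDStream
    import org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest

    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)                          // assumed existing StreamingContext
      .streamName("example-stream")                   // placeholder
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new Latest())
      .checkpointAppName("example-app")               // placeholder
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()
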
[info] 	[SUCCESSFUL ] org.yaml#snakeyaml;1.15!snakeyaml.jar(bundle) (413ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
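
Eviction warnings like the one above are sbt reporting that it selected the highest of several requested versions of the same library. When the selected version is known to be binary-safe for every dependent, the choice can be made explicit (and the warning silenced) in build.sbt; a sketch, assuming 3.9.9.Final is the version intended to ship:

    // build.sbt -- pin the netty version sbt already selected
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"
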
[info] Compiling 36 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/classes...
[warn] four warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn]                                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn]                                            ^
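
All of the kafka-0-8 warnings above carry the same advice: "Update to Kafka 0.10 integration". A minimal sketch of the spark-streaming-kafka-0-10 direct stream that replaces the 0.8-style createDirectStream, assuming an existing StreamingContext `ssc`; broker, group, and topic are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",        // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",                  // placeholder group
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // offsets travel with the stream; no KafkaCluster/Broker/leader plumbing
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("example-topic"), kafkaParams)
    )
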
[warn] four warnings found
[warn] 17 warnings found
[warn] 25 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
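
The deprecated AmazonKinesisClient constructor and the setEndpoint/setRegion setters flagged above are superseded in the AWS SDK for Java v1 by the client builders. A sketch of the builder form; the endpoint and region are placeholders:

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
    import com.amazonaws.services.kinesis.{AmazonKinesis, AmazonKinesisClientBuilder}

    // builder replaces `new AmazonKinesisClient(...)` followed by setEndpoint
    val kinesis: AmazonKinesis = AmazonKinesisClientBuilder.standard()
      .withCredentials(new DefaultAWSCredentialsProviderChain())
      .withEndpointConfiguration(
        new EndpointConfiguration("https://kinesis.us-west-2.amazonaws.com", "us-west-2"))
      .build()
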
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator seconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator milliseconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                                                 ^
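
The two postfix-operator warnings above can be resolved either by adding the suggested `import scala.language.postfixOps`, or, without enabling the feature at all, by switching to dotted duration syntax. A sketch of the latter, assuming ScalaTest's Eventually and SpanSugar as used in the suite being compiled:

    import org.scalatest.concurrent.Eventually._
    import org.scalatest.time.SpanSugar._

    // 5.seconds and 200.milliseconds are ordinary method calls,
    // so no postfixOps feature import is required
    eventually(timeout(5.seconds), interval(200.milliseconds)) {
      assert(true)  // placeholder assertion body
    }
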
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:48: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]   implicit def setAccum[A]: AccumulableParam[mutable.Set[A], A] =
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:49: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     new AccumulableParam[mutable.Set[A], A] {
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:145: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val setAcc = sc.accumulableCollection(mutable.HashSet[Int]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:146: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val bufferAcc = sc.accumulableCollection(mutable.ArrayBuffer[Int]())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:147: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val mapAcc = sc.accumulableCollection(mutable.HashMap[Int, String]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1194: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(1), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1198: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1264: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1278: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType",
[warn]                                                   ^
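
The StageInfo.attemptId warnings above also name their replacement; the fix is a one-token rename to attemptNumber. A minimal illustration against any StageInfo value:

    import org.apache.spark.scheduler.StageInfo

    // attemptNumber replaces the deprecated attemptId accessor
    def attempt(si: StageInfo): Int = si.attemptNumber
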
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:168: trait AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val param = new AccumulatorParam[MyData] {
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: object DoubleAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: object FloatAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]                   ^
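
Every accumulator warning in this run points at the same replacement, AccumulatorV2. A minimal sketch of the modern equivalents, assuming an existing SparkContext `sc`; accumulator names are placeholders:

    // replaces sc.accumulator(0) / sc.accumulator(0L) / sc.accumulator(123.4)
    val counter = sc.longAccumulator("counter")
    val total   = sc.doubleAccumulator("total")
    // list-backed replacement for sc.accumulableCollection(...)
    val seen    = sc.collectionAccumulator[Int]("seen")

    sc.parallelize(1 to 10).foreach { x =>
      counter.add(1)
      seen.add(x)
    }
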
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/jpmml/pmml-model/1.2.15/pmml-model-1.2.15.jar ...
[info] 	[SUCCESSFUL ] org.jpmml#pmml-model;1.2.15!pmml-model.jar (481ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/jpmml/pmml-schema/1.2.15/pmml-schema-1.2.15.jar ...
[info] 	[SUCCESSFUL ] org.jpmml#pmml-schema;1.2.15!pmml-schema.jar (347ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}repl...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-core/1.2.0-incubating/calcite-core-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-core;1.2.0-incubating!calcite-core.jar (615ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-linq4j/1.2.0-incubating/calcite-linq4j-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-linq4j;1.2.0-incubating!calcite-linq4j.jar (444ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive-thriftserver...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}examples...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 346 Scala sources and 93 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/classes...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.7-SNAPSHOT (depends on 1.1.0)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.7-SNAPSHOT   (depends on 1.1.0)
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] 40 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]                                       ^
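(Editorial note: the deprecated boolean ENABLE_JOB_SUMMARY flag warned about above has a string-valued replacement in the parquet-hadoop 1.10 line, JOB_SUMMARY_LEVEL — the same constant the surrounding Spark code already checks. A minimal sketch, assuming a plain Hadoop Configuration:

    import org.apache.hadoop.conf.Configuration
    import org.apache.parquet.hadoop.ParquetOutputFormat

    val hadoopConf = new Configuration()
    // "ALL" | "COMMON_ONLY" | "NONE" replaces the old true/false flag
    hadoopConf.set(ParquetOutputFormat.JOB_SUMMARY_LEVEL, "NONE"))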
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/java/test/org/apache/spark/JavaAPISuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
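(Editorial note: when sbt detects multiple main classes it leaves the jar's Main-Class unset. A one-line sbt 0.13 sketch for declaring one explicitly; the class name is a hypothetical placeholder:

    mainClass in Compile := Some("org.example.Main"))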
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/github/scopt/scopt_2.11/3.7.0/scopt_2.11-3.7.0.jar ...
[info] 	[SUCCESSFUL ] com.github.scopt#scopt_2.11;3.7.0!scopt_2.11.jar (712ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}assembly...
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/test-classes...
[info] Compiling 40 Scala sources and 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/test-classes...
[info] Compiling 28 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/test-classes...
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/test-classes...
[info] Compiling 200 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/test-classes...
[info] Compiling 23 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/test-classes...
[info] Compiling 3 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/test-classes...
[info] Compiling 6 Scala sources and 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala:117: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]       FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK,
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumeStreamSuite.scala:83: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val flumeStream = FlumeUtils.createStream(
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:103: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     var offsetRanges = Array[OffsetRange]()
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:107: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:163: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:194: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:209: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, String](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:251: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:340: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:414: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:494: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:565: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       kafkaStream: DStream[(K, V)]): Seq[(Time, Array[OffsetRange])] = {
[warn]                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:30: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private var kc: KafkaCluster = null
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:40: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     kc = new KafkaCluster(Map("metadata.broker.list" -> kafkaTestUtils.brokerAddress))
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:64: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges = Array(OffsetRange(topic, 0, 0, messages.size))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:80: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val emptyRdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:81: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       sc, kafkaParams, Array(OffsetRange(topic, 0, 0, 0)))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:86: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val badRanges = Array(OffsetRange(topic, 0, 0, messages.size + 1))
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:88: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:102: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:113: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val ranges = rdd.get.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private def getRdd(kc: KafkaCluster, topics: Set[String]) = {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:161: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               OffsetRange(tp.topic, tp.partition, fromOffset, until(tp).offset)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:165: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               tp -> Broker(lo.host, lo.port)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:168: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]           KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder, String](
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:130: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
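(Editorial note: every kafka-0-8 deprecation above carries the same advice, "Update to Kafka 0.10 integration". A minimal sketch of the spark-streaming-kafka-0-10 direct-stream equivalent; the broker address, group id and topic name are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    def directStream(ssc: StreamingContext) = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers"  -> "localhost:9092",
        "key.deserializer"   -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id"           -> "example-group")
      // replaces kafka-0-8's createDirectStream[K, V, KD, VD](ssc, kafkaParams, topics)
      KafkaUtils.createDirectStream[String, String](
        ssc, PreferConsistent, Subscribe[String, String](Seq("topic"), kafkaParams))
    })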
[warn] two warnings found
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocatorSuite.scala:168: non-variable type argument org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf in type org.apache.spark.deploy.k8s.KubernetesConf[org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf] is unchecked since it is eliminated by erasure
[warn]         if (!argument.isInstanceOf[KubernetesConf[KubernetesExecutorSpecificConf]]) {
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:253: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:309: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:473: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
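(Editorial note: the poll(0) calls flagged above use the overload deprecated in kafka-clients 2.0; the replacement takes a java.time.Duration. A minimal sketch, with consumer construction elided:

    import java.time.Duration
    import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer}

    def pollOnce(consumer: KafkaConsumer[String, String]): ConsumerRecords[String, String] =
      consumer.poll(Duration.ofMillis(0))  // replaces consumer.poll(0))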
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.7-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:60: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:88: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:100: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:178: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.createTopic(zkUtils, topic, partitions, 1, config)
[warn]                ^
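(Editorial note: the ZkUtils/AdminUtils deprecations above all point at org.apache.kafka.clients.admin.AdminClient. A minimal topic-creation sketch mirroring the arguments of the deprecated AdminUtils.createTopic call; the broker address is a placeholder:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    val admin = AdminClient.create(props)
    // topic name, partition count, replication factor as in the deprecated call
    admin.createTopics(Collections.singleton(new NewTopic("topic", 1, 1.toShort))).all().get()
    admin.close())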
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:113: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:116: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:121: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:124: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:138: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:151: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "role2"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:152: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "*"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:155: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "role2"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:156: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "*"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:417: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:420: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:271: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:272: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(1).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:273: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(2).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:299: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers2.get(0).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:325: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:329: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:334: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:339: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:380: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:397: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpusDev.getRole.equals("dev"))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:400: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("mem") && r.getScalar.getValue.equals(484.0) && r.getRole.equals("prod")
[warn]                                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:403: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("cpus") && r.getScalar.getValue.equals(1.0) && r.getRole.equals("prod")
[warn]                                                                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtilsSuite.scala:54: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] 29 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 7 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 24 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.7-SNAPSHOT-tests.jar ...
[warn] one warning found
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.7-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.7-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.7-SNAPSHOT    (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.7-SNAPSHOT (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.7-SNAPSHOT   (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.7-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]            ^
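(Editorial note: the ProcessingTime deprecations above name their replacement, Trigger.ProcessingTime. A minimal structured-streaming sketch; the source DataFrame and sink are placeholders:

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.streaming.Trigger

    def startQuery(df: DataFrame) =
      df.writeStream
        .format("console")
        .trigger(Trigger.ProcessingTime("10 seconds"))  // replaces ProcessingTime(...)
        .start())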
[info] Compiling 8 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisInputDStreamBuilderSuite.scala:163: method initialPositionInStream in class Builder is deprecated: use initialPosition(initialPosition: KinesisInitialPosition)
[warn]         .initialPositionInStream(InitialPositionInStream.AT_TIMESTAMP)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:103: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream1 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:106: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream2 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:113: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val inputStream = KinesisUtils.createStream(ssc, appName, "dummyStream",
[warn]                                    ^
[warn] four warnings found
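(Editorial note: the kinesis-asl deprecations above point at KinesisInputDStream.builder. A sketch of the builder form as documented for the 2.4 line; stream name, app name, endpoint and region are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("mySparkStream")
        .checkpointAppName("myAppName")
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
        .regionName("us-west-2")
        .initialPosition(new KinesisInitialPositions.Latest())  // replaces initialPositionInStream(...)
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build())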
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisInputDStreamBuilderSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 6 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/classes...
[info] Compiling 29 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/classes...
[info] Compiling 20 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/classes...
[info] Compiling 304 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55:  [unchecked] unchecked call to SparkAvroKeyRecordWriter(Schema,GenericData,CodecFactory,OutputStream,int,Map<String,String>) as a member of the raw type SparkAvroKeyRecordWriter
[warn]       return new SparkAvroKeyRecordWriter(
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55:  [unchecked] unchecked conversion
[warn]       return new SparkAvroKeyRecordWriter(
[warn]              ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 6 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] there were 16 deprecation warnings; re-run with -deprecation for details
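(Editorial note: the "-deprecation" hint above corresponds to a one-line sbt 0.13 setting — a sketch, not taken from this build's configuration:

    scalacOptions in Compile += "-deprecation"  // report each deprecation site instead of the summary)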
[warn] one warning found
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 290 Scala sources and 33 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 12 Scala sources and 171 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/classes...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:266:  [unchecked] unchecked call to read(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
[warn]                                                    ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:270:  [unchecked] unchecked call to write(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
[warn]                                                     ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:313:  [unchecked] getScheme() in TArrayTypeEntryStandardSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryStandardScheme getScheme() {
[warn]                                          ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryStandardScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:361:  [unchecked] getScheme() in TArrayTypeEntryTupleSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryTupleScheme getScheme() {
[warn]                                       ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryTupleScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TBinaryColumn.java:240:  [unchecked] unchecked cast
[warn]         setValues((List<ByteBuffer>)value);
[warn]                                     ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn]                                    ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
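(Editorial note: both OneHotEncoder warnings describe the same rename — in the 2.4 line, OneHotEncoderEstimator is the forward-compatible API. A minimal sketch; the column names are placeholders:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    // encoder.fit(df).transform(df) replaces the old single-column OneHotEncoder)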
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/classes...
[info] Compiling 191 Scala sources and 128 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn]                                   ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:393: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]             .isInstanceOf[(String, Timestamp)])
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:392: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]           assert(r.get().get(0, TextSocketReader.SCHEMA_TIMESTAMP)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala:204: postfix operator minute should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]         eventually(timeout(1 minute)) {
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:693: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]       q1
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:230: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode("words", "word") { word: String => word.split(" ").toSeq }.select('word),
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:238: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('letters) {
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:288: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode($"*") { case Row(prefix: String, csv: String) =>
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:295: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('prefix, 'csv) { case Row(prefix: String, csv: String) =>
[warn]          ^
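Editor's aside: all four explode deprecations above share one migration; a hedged sketch of the select(functions.explode(...)) form the message suggests (df and the column names are taken from the suite, not a fixed API):

    import org.apache.spark.sql.functions.{col, explode, split}

    // Old: df.explode("words", "word") { s: String => s.split(" ").toSeq }
    // New: project an exploded column instead of the deprecated Dataset.explode.
    val exploded = df.select(explode(split(col("words"), " ")).as("word"))
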
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:228: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(lit(0), lit(2))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]                     ^
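Editor's aside: both rangeBetween warnings point at the Long-based overload, and both currentRow warnings at Window.currentRow; a minimal sketch:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.col

    // Long offsets replace the deprecated Column-based rangeBetween;
    // Window.currentRow replaces the deprecated functions.currentRow.
    val window = Window.partitionBy(col("value")).orderBy(col("key"))
      .rangeBetween(Window.currentRow, 2L)
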
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/ProcessingTimeSuite.scala:30: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn]     def getIntervalMs(trigger: Trigger): Long = trigger.asInstanceOf[ProcessingTime].intervalMs
[warn]                                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetCompatibilityTest.scala:49: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readAllFootersInParallel(hadoopConf, parquetFiles, true)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetInteroperabilitySuite.scala:178: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]                   ParquetFileReader.readFooter(hadoopConf, part.getPath, NO_FILTER)
[warn]                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:133: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:148: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:154: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readAllFootersInParallel(configuration, fs.getFileStatus(path)).asScala.toSeq
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:158: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readFooter(
[warn]                       ^
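Editor's aside: the Parquet footer helpers are deprecated in favor of instance readers; a sketch of the pattern their Javadoc suggests (hedged — exact signatures depend on the parquet-mr version on the classpath):

    import org.apache.parquet.hadoop.ParquetFileReader

    // Open a reader for one file and take its footer, instead of the
    // deprecated static readFooter/readAllFootersInParallel helpers.
    val reader = ParquetFileReader.open(hadoopConf, path)
    try {
      val footer = reader.getFooter
    } finally {
      reader.close()
    }
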
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:55: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]     val executor = ProcessingTimeExecutor(ProcessingTime("1000 milliseconds"), clock)
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:316: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock),
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:357: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock(60 * 1000)),
[warn]                   ^
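Editor's aside: every ProcessingTime deprecation above has the same one-line fix, straight from the message:

    import org.apache.spark.sql.streaming.Trigger

    // Trigger.ProcessingTime accepts an interval string or a millisecond count.
    val trigger = Trigger.ProcessingTime("10 seconds")
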
[warn] 23 warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 9 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/test-classes...
[info] Compiling 14 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/test-classes...
[info] Compiling 193 Scala sources and 66 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/test-classes...
[info] Compiling 88 Scala sources and 17 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.7-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala:31: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]     try f finally SessionState.detachSession()
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaContinuousTest.scala:76: reflective access of structural type member value activeTaskIdCount should be enabled
[warn] by making the implicit value scala.language.reflectiveCalls visible.
[warn] This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
[warn] or by setting the compiler option -language:reflectiveCalls.
[warn] See the Scaladoc for value scala.language.reflectiveCalls for a discussion
[warn] why the feature should be explicitly enabled.
[warn]       assert(tasksEndedListener.activeTaskIdCount.get() == 0)
[warn]                                 ^
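Editor's aside: the structural-type warning is silenced exactly as the message says; a tiny sketch with an illustrative structural type modeled on the listener in the suite:

    import java.util.concurrent.atomic.AtomicInteger
    import scala.language.reflectiveCalls

    // With the import in scope, reflective access through a structural type
    // compiles without the feature warning.
    def readCount(listener: { def activeTaskIdCount: AtomicInteger }): Int =
      listener.activeTaskIdCount.get()
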
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:141: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         Seq(new Field("null", Schema.create(Type.NULL), "doc", null)).asJava
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:164: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:192: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:224: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:250: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val fields = Seq(new Field("field1", UnionOfOne, "doc", null)).asJava
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:303: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field1", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:304: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field2", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:305: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field3", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:306: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field4", complexUnionType, "doc", null)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:970: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val avroField = new Field(name, avroType, "", null)
[warn]                       ^
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 10 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:66: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:95: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:107: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:198: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]         AdminUtils.createTopic(zkUtils, topic, partitions, 1)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:225: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.deleteTopic(zkUtils, topic)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:290: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:304: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:383: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       !zkUtils.pathExists(getDeleteTopicPath(topic)),
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:384: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       s"${getDeleteTopicPath(topic)} still exists")
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:409: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       zkUtils: ZkUtils,
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:421: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]           AdminUtils.deleteTopic(zkUtils, topic)
[warn]                      ^
[warn] 14 warnings found
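Editor's aside: the ZkUtils/AdminUtils warnings above all point at org.apache.kafka.clients.admin.AdminClient, and the poll(0) warnings at the Duration overload; a hedged sketch (broker address and topic name are illustrative):

    import java.time.Duration
    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    val admin = AdminClient.create(props)
    // Replaces AdminUtils.createTopic/deleteTopic going through ZkUtils.
    admin.createTopics(Collections.singleton(new NewTopic("topic", 1, 1.toShort))).all().get()
    admin.deleteTopics(Collections.singleton("topic")).all().get()
    admin.close()
    // For the consumer, kc.poll(Duration.ZERO) replaces the deprecated kc.poll(0).
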
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] there were 25 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:464:  [unchecked] unchecked cast
[warn]         setLint((List<Integer>)value);
[warn]                                ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:120: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) < 0.1)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:135: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) == summary.trainingCost)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:206: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]       model.computeCost(dataset)
[warn]             ^
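Editor's aside: the computeCost deprecation names two replacements; a sketch of both (dataset is assumed to be the suite's feature DataFrame):

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.evaluation.ClusteringEvaluator

    val model = new KMeans().setK(2).setSeed(1L).fit(dataset)
    // 1) Evaluate the clustering instead of computing a cost...
    val silhouette = new ClusteringEvaluator().evaluate(model.transform(dataset))
    // 2) ...or, as the message notes, read the training cost off the summary.
    val cost = model.summary.trainingCost
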
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:46: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     ParamsSuite.checkParams(new OneHotEncoder)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:51: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:74: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:96: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:110: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:121: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val t = new OneHotEncoder()
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:156: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]       val encoder = new OneHotEncoder()
[warn]                         ^
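Editor's aside: all the OneHotEncoder warnings share one migration in 2.4; a minimal sketch (column names are illustrative, df assumed):

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    // OneHotEncoderEstimator is the fit/transform replacement that is renamed
    // OneHotEncoder in 3.0.
    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(df).transform(df)
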
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:52: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     var df = readImages(imagePath)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:55: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, false, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:58: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:62: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 0.5, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:69: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:74: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/DP153539.jpg", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:79: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/multi-channel/BGRA.png", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:84: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:90: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, false, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:96: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, 1.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:103: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, -0.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:109: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 0.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:114: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, sparkSession = spark, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:119: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:124: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, -3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:129: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 0, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:136: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val images = readImages(imagePath + "/multi-channel/").collect
[warn]                  ^
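Editor's aside: each readImages call above maps onto the image data source the warning quotes; a hedged sketch (the dropInvalid option stands in for the old boolean flags, spark and imagePath assumed):

    val df = spark.read.format("image")
      .option("dropInvalid", true)
      .load(imagePath)
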
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:227: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:303: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:338: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:919: object LogisticRegressionWithSGD in package classification is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val model = LogisticRegressionWithSGD.train(points, 2)
[warn]                 ^
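Editor's aside: a sketch of the first replacement the LogisticRegressionWithSGD warnings name (points assumed to be the suite's RDD[LabeledPoint]; the DataFrame-based ml.classification.LogisticRegression is the other option):

    import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS

    val model = new LogisticRegressionWithLBFGS().setIntercept(true).run(points)
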
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala:369: method train in object KMeans is deprecated: Use train method without 'runs'
[warn]       val model = KMeans.train(points, 2, 2, 1, initMode)
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:80: value precision in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.precision) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:81: value recall in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.recall) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:82: value fMeasure in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.fMeasure) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:58: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:102: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:156: object LassoWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = LassoWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:49: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(true)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:75: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:106: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:163: object LinearRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val model = LinearRegressionWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:63: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linearReg = new LinearRegressionWithSGD()
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:71: constructor RidgeRegressionWithSGD in class RidgeRegressionWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val ridgeReg = new RidgeRegressionWithSGD()
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:113: object RidgeRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = RidgeRegressionWithSGD.train(points, 2)
[warn]                 ^
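Editor's aside: the Lasso/Ridge/LinearRegressionWithSGD deprecations all funnel into one estimator; a sketch, with the regParam caveat the messages call out (training is an assumed DataFrame with label/features columns):

    import org.apache.spark.ml.regression.LinearRegression

    // elasticNetParam = 1.0 is lasso, 0.0 is ridge; set regParam explicitly,
    // since LinearRegression defaults to 0.0 where the SGD classes used 0.01.
    val lasso = new LinearRegression().setElasticNetParam(1.0).setRegParam(0.01)
    val ridge = new LinearRegression().setElasticNetParam(0.0).setRegParam(0.01)
    val model = lasso.fit(training)
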
[warn] 45 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.7-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] Total time: 881 s, completed Jul 11, 2020 6:48:56 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
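Editor's aside: the kafka-0-8 module compiles against its own deprecated API, so these warnings persist there; user code migrates to the kafka-0-10 integration the messages name. A hedged sketch of the direct stream on that side (ssc, broker, and topic are illustrative):

    import org.apache.kafka.common.serialization.ByteArrayDeserializer
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[ByteArrayDeserializer],
      "value.deserializer" -> classOf[ByteArrayDeserializer],
      "group.id" -> "example")
    val stream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](
      ssc, PreferConsistent, Subscribe[Array[Byte], Array[Byte]](Seq("topic"), kafkaParams))
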
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 95 files
[info] SHA-1: 9e589a6cc9a4827428e54646bd11380d3cb48a43
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 33 s, completed Jul 11, 2020 6:49:29 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 88 files
[info] SHA-1: dbe2d4d871327a60cc33eb5cb386178d7d8807e3
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 34 s, completed Jul 11, 2020 6:50:03 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
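The warning itself names the replacement; a minimal sketch of the builder form under Spark 2.4 (stream name, endpoint, region, and app name are placeholders):

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("myStream")
        .endpointUrl("https://kinesis.us-east-1.amazonaws.com")
        .regionName("us-east-1")
        .initialPosition(new KinesisInitialPositions.Latest())
        .checkpointAppName("myKinesisApp")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()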
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
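The same builder migration applies on the DynamoDB side; a minimal sketch, assuming AWS SDK for Java 1.11.x:

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
    import com.amazonaws.services.dynamodbv2.{AmazonDynamoDB, AmazonDynamoDBClientBuilder}

    def buildDynamoClient(regionName: String): AmazonDynamoDB =
      AmazonDynamoDBClientBuilder.standard()
        .withCredentials(new DefaultAWSCredentialsProviderChain())
        // replaces new AmazonDynamoDBClient(...) + setRegion(RegionUtils.getRegion(...))
        .withRegion(regionName)
        .build()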
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
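The KCL 1.x replacement for the deprecated two-argument constructor is Worker.Builder; a minimal sketch, assuming amazon-kinesis-client 1.x:

    import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory
    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.{KinesisClientLibConfiguration, Worker}

    def buildWorker(factory: IRecordProcessorFactory,
                    conf: KinesisClientLibConfiguration): Worker =
      new Worker.Builder()
        .recordProcessorFactory(factory)  // replaces new Worker(factory, conf)
        .config(conf)
        .build()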
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to 2 files
[warn] Strategy 'filterDistinctLines' was applied to 8 files
[warn] Strategy 'first' was applied to 50 files
[info] SHA-1: decadc3b7cbfe3a0f521dcfe7a7b4a43703169d6
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 38 s, completed Jul 11, 2020 6:50:41 AM
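The 'discard' / 'filterDistinctLines' / 'first' lines above are sbt-assembly merge strategies applied while building the fat jar; such a mapping is declared in the build definition. A minimal sketch of the shape of that setting (the patterns are illustrative, not the rules Spark's build actually uses):

    // build.sbt fragment, assuming sbt-assembly is enabled (sbt 0.13-style keys)
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", "services", xs @ _*) => MergeStrategy.filterDistinctLines
      case PathList("META-INF", xs @ _*)             => MergeStrategy.discard
      case _                                         => MergeStrategy.first
    }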

========================================================================
Detecting binary incompatibilities with MiMa
========================================================================
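This phase compares the freshly built jars against a previous release using the MiMa sbt plugin; Spark drives it through its own MimaBuild/MimaExcludes machinery, so the fragment below is only a generic sbt-mima-plugin sketch, not Spark's actual configuration:

    // build.sbt fragment, assuming sbt-mima-plugin is enabled
    import com.typesafe.tools.mima.core._

    mimaPreviousArtifacts := Set("org.apache.spark" %% "spark-core" % "2.4.0")

    // known, intentional breaks are excluded case by case
    mimaBinaryIssueFilters += ProblemFilters.exclude[DirectMissingMethodProblem](
      "org.apache.spark.SomeClass.someRemovedMethod")  // hypothetical target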
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Strategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.MainClassOptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
Error instrumenting class:org.apache.spark.mapred.SparkHadoopMapRedUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Hello
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoHelperChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SignalUtils.ActionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuerySpecificationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DateAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ImplicitTypeCasts
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.ZooKeeperLeaderElectionAgent.LeadershipStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnsetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AggregationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.Listener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestWorkerState
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.Accessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeSorterSpillMerger.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColumnReferenceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
Error instrumenting class:org.apache.spark.input.StreamInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.CaseWhenCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.BlockFetchStarter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator16$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.$$typecreator1$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractWindowExpressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.PrefixCache
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.CubeType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGroupingAnalytics
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
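As the hint says, verbosity is adjustable per session; e.g. from a Scala shell:

    // `sc` is the shell-provided SparkContext
    sc.setLogLevel("WARN")  // valid levels: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN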
Error instrumenting class:org.apache.spark.deploy.SparkSubmit$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableObjectArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.Division
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.TrackerState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ParenthesizedExpressionContext
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableProviderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveBroadcastHints
Error instrumenting class:org.apache.spark.sql.execution.command.DDLUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBytesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.expressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFunctionContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.CommitLog
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LocationSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.LBFGS.CostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyKeyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.COMMITTED
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.ClientPool
Error instrumenting class:org.apache.spark.scheduler.SplitInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkBuildInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SmallIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseMatrixPickler
Error instrumenting class:org.apache.spark.api.python.DoubleArrayWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelReader
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesStore
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.AnalysisErrorAt
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Once
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RepairTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowConstructorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery.PartitionedRelation
Error instrumenting class:org.apache.spark.deploy.SparkHadoopUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.AssociationRules.Rule
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedGroupConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummaryAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.CheckpointWriter.CheckpointWriteHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowDefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.EndpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNewInstance
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DateConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Aggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.IteratorForPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetDatabasePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetQuantifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CtesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans.DslLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Evolving
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.ReceiverTrackerEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.IdentityProjection
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.$SortedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.OffsetSeqLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WidenSetOperationTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.DateTimeOperations

[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.output
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.$ClientCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorResponse
Error instrumenting class:org.apache.spark.launcher.InProcessLauncher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Count
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.FloatConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.BinaryPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.InputFileBlockHolder.FileBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FailNativeCommandContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableSchemaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Timer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Cholesky
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis
Error instrumenting class:org.apache.spark.deploy.master.ui.MasterWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableAttemptInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.RollupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableValuedFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Min
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStoreProvider$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.EmptyDirectoryWriteTask
20/07/11 06:51:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
20/07/11 06:51:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
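These two warnings mean netlib-java found no native BLAS and MLlib fell back to its pure-JVM F2J implementation. Bundling the native wrappers usually clears them; a minimal sketch of the extra dependency, per the MLlib docs for this line of releases:

    // build.sbt: pulls in netlib-java's native system/reference wrappers
    libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()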
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.linalg.distributed.RowMatrix.$SVDMode$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingDeduplicationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableLongArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumericLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionValContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.TextBasedFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowDatabasesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.TransportFrameDecoder.Interceptor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.FixedLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Bucketizer.BucketizerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.LabeledPointPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.LevelDBLogger
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerSlave
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter.ElementConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.errors.TreeNodeException
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreType
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.State
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.CTESubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TypeConstructorContext
Error instrumenting class:org.apache.spark.SSLOptions
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Append
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.ArrayDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableAliasContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ImplicitOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelReader
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator2$1
Error instrumenting class:org.apache.spark.input.WholeTextFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveMissingReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnquotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.PrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNaturalAndUsingJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
Error instrumenting class:org.apache.spark.deploy.history.HistoryServer
Error instrumenting class:org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
Error instrumenting class:org.apache.spark.api.python.TestOutputKeyConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.NNLS.Workspace
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.api.python.TestWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.RawStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SendHeartbeat
Error instrumenting class:org.apache.spark.deploy.FaultToleranceTest$delayedInit$body
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMax
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.LeftSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.Optimizer.OptimizeSubqueries
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.StatFunctions.CovarianceCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Unstable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RecoverPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.Hasher
Error instrumenting class:org.apache.spark.input.StreamBasedRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskReaper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.STATE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LocalIndexEncoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.SharedReadWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Replaced
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.StringType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterInStandby
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.DatabaseDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.ChainedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator18$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DereferenceContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.MultiLineCSVDataSource$
Error instrumenting class:org.apache.spark.deploy.security.HBaseDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntegerLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.LookupFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BooleanAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Complete
Error instrumenting class:org.apache.spark.input.StreamFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnalyzeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.TimestampConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowFunctionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DoubleAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StructContext
[WARN] Unable to detect inner functions for class:org.apache.spark.MapOutputTrackerMaster.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateDatabaseContext
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicatedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.OneWayRpcCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteDirContext
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase$NullIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SortItemContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubquery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Numeric$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.ReceiverInputDStream.ReceiverRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Key
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.util.BytecodeUtils.MethodInvocationFinder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator30$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.BooleanEquality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ByteConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.$DigestCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.IntHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.ABORTED
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ByteType.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitioningUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.trees.TreeNodeRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.MutableProjection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.ByteArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.SummarizerBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UncacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslSymbol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Max
20/07/11 06:51:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingJoinStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ShuffleMetricsSource
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.InMemoryScans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.FixNullability
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierAlternativeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.FunctionArgumentConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StructAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Aggregation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus
Error instrumenting class:org.apache.spark.ml.source.libsvm.LibSVMFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.AttributeSeq
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.UPDATING
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggAliasInGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.QuantileDiscretizer.QuantileDiscretizerWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes
Error instrumenting class:org.apache.spark.deploy.rest.RestSubmissionServer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDeBase.BasePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.SpecialLimits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2n
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.PartitionOverwriteMode
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowVectorAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedWindowContext
Error instrumenting class:org.apache.spark.sql.execution.command.CommandUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Update
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.SpillableArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter.$KeyValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.StringArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StorageHandlerContext
Error instrumenting class:org.apache.spark.input.Configurable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DataType.JSortedObject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsFractional
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Expression
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$ChunkCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FrameBoundContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StatefulAggregationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableIdentifierContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.RatingPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetOperationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.MessageDecoder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStore.MaintenanceTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeColNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinTypeContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildSide
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildRight
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExplainContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.FlattenStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
Error instrumenting class:org.apache.spark.ui.JettyUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.$KVSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOrdinalInOrderByAndGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Binary$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineWriter
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.StopAppClient
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.LongAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.QueryExecution.debug
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GroupingSetContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ShortAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryNoWithContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionPath$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.$Index
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslExpression
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrayAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Subscript
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyRDDWithPartitions
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
Error instrumenting class:org.apache.spark.sql.catalyst.parser.ParserUtils$EnhancedLogicalPlan$
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.ShuffleSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.HashComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
Error instrumenting class:org.apache.spark.deploy.worker.ui.WorkerWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.GeneratorState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseMatrixPickler
Error instrumenting class:org.apache.spark.metrics.MetricsSystem
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.xml.UDFXPathUtil.ReusableStringReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StringLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.ImplicitAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Utils.Lock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$RegisterDriverCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.TASK_END_REASON_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.FlatMapGroupsWithStateStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CompleteRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.ShippableVertexPartition.ShippableVertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator22$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.LimitMarker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByPercentileContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.JsonFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubqueryColumnAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BinaryType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.StoreVersion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockSort
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDA.LDAReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Log
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.RemoveAllHints
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSinkLog
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalNotContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.ClassInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.RandomVertexCut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.IntWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.OneForOneStreamManager.StreamState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupByType
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.LongWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingRelationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.ParserUtils.EnhancedLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierListContext
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueStore
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableIntArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SortPrefixUtils.NoOpPrefixComparator
Error instrumenting class:org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CheckForWorkerTimeOut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$MethodAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetLongDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.OutputCommitCoordinatorEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationListener
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
Error instrumenting class:org.apache.spark.metrics.sink.MetricsServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.SPARK_LISTENER_EVENT_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Message
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NumNonZeros
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.NullIntolerant
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.$ManualCloseBufferedOutputStream$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.IntAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.IntegerType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.QuasiNewton
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StringAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStore$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleMethodContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BinaryAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecLocationContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamMetadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.RightSide
Error instrumenting class:org.apache.spark.ui.ServerInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.Strings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.WindowsSubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.DiskMapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.GradientBoostedTreesModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.sql.execution.datasources.NoopCache$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.CholeskySolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.JoinSelection
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BucketSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DoubleLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.SerializationDebugger
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionCallContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugQuery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$Location
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.IntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableByteArray
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.FloatAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LateralViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryView
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedPrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ShortType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ClearCacheContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.StructTypePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoder.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.1
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter$
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition1D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.HistoryServerDiskManager.Lease
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Deprecated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRelations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition2D
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TinyIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.$SortState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ColumnarBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetBinaryDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.BooleanConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.CountMinSketch.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelWriter
Error instrumenting class:org.apache.spark.sql.catalyst.util.CompressionCodecs$
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskRunner
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.RLEIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableHeaderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.io.ReadAheadInputStream.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.DoublePrefixComparator
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LoadDataContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.QuotedStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Auto
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeNNZ
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DateType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhenClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ChangeColumnContext
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.$OptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator25$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ManageResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationManagerSource
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.Metadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalFieldContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryBodyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatSerdeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$SetAccumulator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinCriteriaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator19$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.1
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSourceLog
[WARN] Unable to detect inner functions for class:org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.LongType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.AutoBatchedPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.ui.ServerInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSink$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMean
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.JOB_RESULT_FORMATTED_CLASS_NAMES
Error instrumenting class:org.apache.spark.ui.WebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.KVStoreScalaSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimaryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticUnaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckSuccess
Error instrumenting class:org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexer.CategoryStats
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.ValuesReaderIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBucketContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SearchedCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.RevokedLeadership
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColPositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.lib.SVDPlusPlus.Conf
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.$RetryingBlockFetchListener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByRowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.$CloseAndFlushShieldOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.CatalystDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NonReservedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.ElectedLeader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExistsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.CommandBuilderUtils.JavaVendor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator2$1
Error instrumenting class:org.apache.spark.executor.ExecutorSource
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$
[WARN] Unable to detect inner functions for class:org.apache.spark.TestUtils.JavaSourceFromString
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NullLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TruncateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.DoubleAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StarContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.LongAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Nominal$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.ReaderIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.Builder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.RowPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayes.$$typecreator9$1
Error instrumenting class:org.apache.spark.input.StreamRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.RowComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Decoder
Error instrumenting class:org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.BasicOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.PushDownOperatorsToDataSource.FilterAndProject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.AbstractLauncher.ArgumentValidator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SimpleCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowFrame
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NestedConstantListContext
Error instrumenting class:org.apache.spark.api.python.JavaToWritableConverter
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionDirectory$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryOrganizationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkDirCleanup
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.$SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator.$StreamBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.WriterThread
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$Heartbeater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ManagedBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NNLSSolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.NettyUtils.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableLikeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ByteArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.TableDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.FileInputDStream.FileInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SkewSpecContext
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanDefaultContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Projection
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.IntAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherBackend.BackendConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetIntDictionaryAwareDecimalConverter
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$EdgeData
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BooleanType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.JdbcRDD.ConnectionFactory
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.KVComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Stable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowOrder
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Solver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTableColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildLeft
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Wildcard
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionBase.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WindowFrameCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex$
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutput
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$ServerConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Node
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslString
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore
Error instrumenting class:org.apache.spark.api.python.TestOutputValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.RadixSortSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator.AliasedGenerator$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IndexToString.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGenerate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec.StateStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.ExternalAppendOnlyUnsafeRowArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.StringToAttributeConversionHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.PullOutNondeterministic
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FirstContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableLocationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ByteAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetStorageStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStoreOps
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.StringAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.EdgeData$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubscriptContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.api.r.SQLUtils.RegexContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.PythonWorkerFactory.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.ManualCloseOutputStream
Error instrumenting class:org.apache.spark.streaming.StreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeWeightSum
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DecimalAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data$
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog$
Error instrumenting class:org.apache.spark.streaming.api.java.JavaStreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Metric
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.NullOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.MutableStateArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.PipedRDD.NotEqualsFileNameFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.DumpByteCode
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTablePartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Variance
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaRDD.KafkaRDDIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTempViewUsingContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator30$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.CSVFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$FieldAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Power
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.DummySerializerInstance.$1
Error instrumenting class:org.apache.spark.input.ConfigurableCombineFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Summary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.JobScheduler.JobHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.HandleNullInputsForUDF
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Message.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.VertexPartition.VertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$DownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.TypeAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.ExecuteWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMetric
Error instrumenting class:org.apache.spark.sql.execution.streaming.SinkFileStatus$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StackCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.GlobalAggregates
Error instrumenting class:org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.SearchState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.input
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ShuffleMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteJobDescription
Error instrumenting class:org.apache.spark.status.api.v1.ApiRootResource$
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleRead
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowFrameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault1Context
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.TimestampType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggregateFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault2Context
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReregisterWithMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.TimestampAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.ReceiverSupervisor.ReceiverState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoParams
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Stop
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.BloomFilter.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumberContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.MODE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.KeyWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LongConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterViewQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFuncNameContext
Error instrumenting class:org.apache.spark.SparkContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NormalEquation
Error instrumenting class:org.apache.spark.sql.execution.datasources.CodecStreams$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLContext.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockBuilder
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigestSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.HashMapGrowthStrategy.Doubling
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.impl.RandomForest.NodeIndexInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleFunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.MetadataMismatchException
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$MapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.QuasiNewtonSolver.NormalEquationCostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Mean
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTablesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.LongHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestMasterState
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport
Error instrumenting class:org.apache.spark.ui.SparkUI
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate.RichAttribute
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetAppId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUpCast
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolvePivot
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.InMemoryBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.FrequentItems.FreqItemCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.StringPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GenericFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableSerDeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.MyByteArrayOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowRefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTblPropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ShortConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IfCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetStringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.Event
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMin
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableApplicationInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsConflicted
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager.$ReferenceWithCleanup
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.AppExecId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IntConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Unresolved$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigDecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma
Error instrumenting class:org.apache.spark.input.WholeTextFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlockBuilder
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.StringToColumn
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatDelimitedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetState
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.RandomForestModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.FloatAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueType
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimitiveDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Fixed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.text.TextFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator9$1
Created : .generated-mima-class-excludes in current directory.
Created : .generated-mima-member-excludes in current directory.
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] spark-parent: previous-artifact not set, not analyzing binary compatibility
[info] spark-tags: previous-artifact not set, not analyzing binary compatibility
[info] Done updating.
[info] spark-tools: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-flume-sink: previous-artifact not set, not analyzing binary compatibility
[info] spark-kvstore: previous-artifact not set, not analyzing binary compatibility
[info] spark-unsafe: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-common: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-shuffle: previous-artifact not set, not analyzing binary compatibility
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-library/2.11.8/scala-library-2.11.8.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-library;2.11.8!scala-library.jar (861ms)
[info] spark-launcher: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-launcher_2.11:2.3.0  (filtered 1)
[info] spark-sketch: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sketch_2.11:2.3.0  (filtered 1)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-reflect/2.11.8/scala-reflect-2.11.8.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-reflect;2.11.8!scala-reflect.jar (667ms)
[info] spark-network-yarn: previous-artifact not set, not analyzing binary compatibility
[info] spark-mllib-local: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib-local_2.11:2.3.0  (filtered 1)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
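The warning above flags the deprecated AccumulableParam in favor of AccumulatorV2. A minimal sketch of a custom accumulator on the AccumulatorV2 API; the class and field names here are illustrative, not part of this build:

    import org.apache.spark.util.AccumulatorV2

    // Long-summing accumulator on the AccumulatorV2 API that replaces
    // the deprecated AccumulableParam flagged in the warning above.
    class SumAccumulator extends AccumulatorV2[Long, Long] {
      private var _sum = 0L
      override def isZero: Boolean = _sum == 0L
      override def copy(): SumAccumulator = {
        val acc = new SumAccumulator
        acc._sum = _sum
        acc
      }
      override def reset(): Unit = _sum = 0L
      override def add(v: Long): Unit = _sum += v
      override def merge(other: AccumulatorV2[Long, Long]): Unit = _sum += other.value
      override def value: Long = _sum
    }

An instance is registered once on the driver, e.g. sc.register(new SumAccumulator, "sum"), before tasks add to it.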
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro/1.7.7/avro-1.7.7.jar ...
[info] 	[SUCCESSFUL ] org.apache.avro#avro;1.7.7!avro.jar (445ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scalap/2.11.0/scalap-2.11.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scalap;2.11.0!scalap.jar (482ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-compiler/2.11.0/scala-compiler-2.11.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-compiler;2.11.0!scala-compiler.jar (1115ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.1/scala-xml_2.11-1.0.1.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.1!scala-xml_2.11.jar(bundle) (361ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.1/scala-parser-combinators_2.11-1.0.1.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.1!scala-parser-combinators_2.11.jar(bundle) (436ms)
[info] spark-ganglia-lgpl: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-yarn: previous-artifact not set, not analyzing binary compatibility
[info] spark-kubernetes: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-mesos: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-catalyst: previous-artifact not set, not analyzing binary compatibility
[info] spark-graphx: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-graphx_2.11:2.3.0  (filtered 3)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
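The kafka-0-10 warnings above come from KafkaConsumer.poll(long), which Kafka 2.0 deprecates in favor of poll(java.time.Duration). A standalone sketch of the replacement overload; the broker address, group id, and topic are placeholders:

    import java.time.Duration
    import java.util.{Arrays, Properties}
    import org.apache.kafka.clients.consumer.KafkaConsumer

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker
    props.put("group.id", "example-group")
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Arrays.asList("example-topic"))
    // Non-deprecated overload: poll(Duration) instead of poll(0)
    val records = consumer.poll(Duration.ofMillis(100))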
[info] spark-streaming-flume: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
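The createStream deprecations above point at KinesisInputDStream.builder. A sketch against the Spark 2.4 builder API, assuming those signatures; the stream, endpoint, region, and app names are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    // Builder-style replacement for the deprecated KinesisUtils.createStream
    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("example-stream")
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
        .regionName("us-west-2")
        .initialPosition(new KinesisInitialPositions.Latest())
        .checkpointAppName("example-app")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()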
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
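The AWS SDK deprecations above (the client constructors plus setEndpoint/setRegion) are superseded in SDK v1 by the client builders. A sketch with a placeholder endpoint and region:

    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder

    // Builder replacing `new AmazonKinesisClient(credentials)` + setEndpoint(...)
    val kinesis = AmazonKinesisClientBuilder.standard()
      .withEndpointConfiguration(
        new EndpointConfiguration("https://kinesis.us-west-2.amazonaws.com", "us-west-2"))
      .build()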
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-streaming-kinesis-asl: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
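All of the kafka-0-8 warnings above carry the same advice: move to the kafka-0-10 integration. A sketch of its direct stream; the broker address, group id, and topic are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

    // Direct stream on the kafka-0-10 API that replaces the 0.8 KafkaUtils
    def directStream(ssc: StreamingContext) = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "example-group")
      KafkaUtils.createDirectStream[String, String](
        ssc, PreferConsistent, Subscribe[String, String](Seq("example-topic"), kafkaParams))
    }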
[info] spark-streaming-kafka-0-8: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming_2.11:2.3.0  (filtered 3)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.21/slf4j-api-1.7.21.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.21!slf4j-api.jar (274ms)
[info] spark-streaming-kafka-0-10-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-flume-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kinesis-asl-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-8-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-10: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.0  (filtered 6)
[info] spark-core: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-core_2.11:2.3.0  (filtered 902)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
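The readFooter warnings above refer to the static ParquetFileReader.readFooter, which Parquet 1.10 (the version Spark 2.4 builds against) deprecates in favor of the InputFile-based reader. A sketch, assuming those Parquet 1.10 signatures:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // InputFile-based footer read replacing the deprecated static readFooter
    def footerMetadata(conf: Configuration, path: Path) = {
      val reader = ParquetFileReader.open(HadoopInputFile.fromPath(path, conf))
      try reader.getFooter.getFileMetaData finally reader.close()
    }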
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
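The fix this warning suggests is mechanical; a minimal sketch of moving a structured-streaming trigger off the deprecated ProcessingTime class (df stands for any streaming DataFrame):

    // Before (deprecated): .trigger(ProcessingTime("5 seconds"))
    import org.apache.spark.sql.streaming.Trigger

    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("5 seconds"))  // or Trigger.ProcessingTime(5000L)
      .start()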
[info] spark-avro: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
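The poll(long) overload warned about above was deprecated by the Kafka clients in favor of poll(java.time.Duration) (KIP-266). A minimal sketch of the replacement call, assuming a clients version that ships the Duration overload:

    import java.time.Duration

    // Same effect as consumer.poll(pollTimeoutMs), with KIP-266's stricter
    // bound on how long metadata fetches may block.
    val records = consumer.poll(Duration.ofMillis(pollTimeoutMs))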
[info] spark-sql-kafka-0-10: previous-artifact not set, not analyzing binary compatibility
[info] spark-hive: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
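Per the warning, 2.4-era code should use OneHotEncoderEstimator (renamed back to OneHotEncoder in 3.0). A minimal sketch; the column names are illustrative:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))   // illustrative columns
      .setOutputCols(Array("categoryVec"))

    // Unlike the old transformer, this is an Estimator: fit, then transform.
    val encoded = encoder.fit(df).transform(df)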
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] spark-repl: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-hive-thriftserver: previous-artifact not set, not analyzing binary compatibility
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/spark-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-assembly: previous-artifact not set, not analyzing binary compatibility
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[info] spark-mllib: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:2.3.0  (filtered 513)
[info] spark-sql: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:2.3.0  (filtered 291)
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/spark-examples_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-examples: previous-artifact not set, not analyzing binary compatibility
[success] Total time: 47 s, completed Jul 11, 2020 6:53:05 AM
[info] Building Spark assembly (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos assembly/package
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
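The replacement named above, AccumulatorV2, is subclassed rather than driven by a param object. A minimal sketch of a custom accumulator on that API:

    import org.apache.spark.util.AccumulatorV2

    // Long-summing accumulator: IN = Long, OUT = Long.
    class LongSum extends AccumulatorV2[Long, Long] {
      private var sum = 0L
      def isZero: Boolean = sum == 0L
      def copy(): LongSum = { val c = new LongSum; c.sum = sum; c }
      def reset(): Unit = sum = 0L
      def add(v: Long): Unit = sum += v
      def merge(other: AccumulatorV2[Long, Long]): Unit = sum += other.value
      def value: Long = sum
    }
    // Register on the SparkContext before use: sc.register(new LongSum, "myStat")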
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 23 s, completed Jul 11, 2020 6:53:41 AM

========================================================================
Running Java style checks
========================================================================
Checkstyle checks passed.

========================================================================
Running Spark unit tests
========================================================================
[info] Running Spark tests using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] ScalaTest
[info] Run completed in 56 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 13 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 25 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] BitArraySuite:
[info] - error case when create BitArray (18 milliseconds)
[info] - bitSize (4 milliseconds)
[info] - set (2 milliseconds)
[info] - normal operation (17 milliseconds)
[info] - merge (24 milliseconds)
[info] BloomFilterSuite:
[info] - accuracy - Byte (9 milliseconds)
[info] - mergeInPlace - Byte (6 milliseconds)
[info] - accuracy - Short (9 milliseconds)
[info] - mergeInPlace - Short (11 milliseconds)
[info] - accuracy - Int (80 milliseconds)
[info] - mergeInPlace - Int (154 milliseconds)
[info] - accuracy - Long (64 milliseconds)
[info] - mergeInPlace - Long (176 milliseconds)
[info] Test run started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testKill started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testLauncher started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testErrorPropagation started
[info] ScalaTest
[info] Run completed in 758 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.227s
[info] Test run started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testMissingArg started
[info] SparkSinkSuite:
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testAllOptions started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testEqualSeparatedOption started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testExtraOptions started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.205s
[info] Test run started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliParser started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkLauncher started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testAlternateSyntaxParsing started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunner started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testSparkRShell started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testMissingAppResource started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testShellCliParser started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testClusterCmdBuilder started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testDriverCmdBuilder started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliKillAndStatus started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoArg started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkFallback started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerWithMasterNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliHelpAndNoArg started
[info] Test run finished: 0 failed, 0 ignored, 15 total, 0.09s
[info] Test run started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testNoRedirectToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithOutputRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectOutputToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectsSimple started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithLogRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testFailedChildProc started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorTwiceFails started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testBadLogRedirect started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectLastWins started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectToLog started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.138s
[info] Test run started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testValidOptionStrings started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testJavaMajorVersion started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testPythonArgQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testWindowsBatchQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testInvalidOptionStrings started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.003s
[info] Test run started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testTimeout started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testStreamFiltering started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testSparkSubmitVmShutsDown started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testLauncherServerReuse started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testAppHandleDisconnect started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testCommunication started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.156s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithStart started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescending started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.testRefWithIntNaturalKey started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndex started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.268s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testDuplicateIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testEmptyIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIndexAnnotation started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNumEncoding started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexMethod started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testKeyClashes started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex2 started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.025s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithStart started
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.7-SNAPSHOT.jar ...
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadChallenge started
[info] - Success with ack (2 seconds, 147 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongAppId started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongNonce started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testMismatchedSecret started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessage started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescending started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessageWhenTransferringZeroBytes started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithLast started
[info] - accuracy - String (3 seconds, 662 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testAuthEngine started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadKeySize started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithSkip started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 1.46s
[info] Test run started
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.streamStatesAreFreedWhenConnectionIsClosedEvenIfBufferIteratorThrowsException started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.managedBuffersAreFeedWhenConnectionIsClosed started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.testRefWithIntNaturalKey started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.192s
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescending started
[info] Test run started
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithSkip started
[info] Done packaging.
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithLast started
[info] - Failure with nack (1 second, 587 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndex started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run finished: 0 failed, 0 ignored, 38 total, 3.058s
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run started
[info] Test org.apache.spark.util.kvstore.ArrayWrappersSuite.testGenericArrayKey started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBBenchmark ignored
[info] Test run finished: 0 failed, 1 ignored, 0 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testBasicIteration started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMultipleObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMetadata started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testRemoveAll started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.017s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleTypesWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testObjectWriteReadDelete started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testSkip started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleObjectWriteReadDelete started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
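Both external modules above deprecate toward newer integrations. For Kinesis, the warnings point at KinesisInputDStream.builder; a hedged sketch against the 2.4 docs (stream, region, and app names are placeholders; verify method names against the version in use):

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    val kinesisStream = KinesisInputDStream.builder
      .streamingContext(ssc)                 // ssc: an existing StreamingContext
      .streamName("myStream")                // placeholder
      .endpointUrl("https://kinesis.us-east-1.amazonaws.com")
      .regionName("us-east-1")
      .initialPosition(new KinesisInitialPositions.Latest)
      .checkpointAppName("myApp")            // placeholder
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()

For the kafka-0-8 classes, "Update to Kafka 0.10 integration" means the spark-streaming-kafka-0-10 direct stream, roughly as below (broker, group, and topic are placeholders):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010._

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker:9092",               // placeholder
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example-group")             // placeholder

    val kafkaStream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("topicA"), kafkaParams))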
[info] ScalaTest
[info] Run completed in 16 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 15 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 13 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 17 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testReopenAndVersionCheckDb started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMetadata started
[info] - Failure with timeout (2 seconds, 61 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testRemoveAll started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testNegativeIndexValues started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 2.114s
[info] - mergeInPlace - String (3 seconds, 500 milliseconds)
[info] - incompatible merge (2 milliseconds)
[info] CountMinSketchSuite:
[info] - accuracy - Byte (371 milliseconds)
[info] - mergeInPlace - Byte (260 milliseconds)
[info] Test run started
[info] - Multiple consumers (2 seconds, 249 milliseconds)
[info] - accuracy - Short (967 milliseconds)
[info] - mergeInPlace - Short (281 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchUnregisteredExecutor started
[info] - accuracy - Int (677 milliseconds)
[info] - mergeInPlace - Int (237 milliseconds)
[info] - Multiple consumers with some failures (1 second, 901 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongExecutor started
[info] - accuracy - Long (886 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNoServer started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testRegisterInvalidExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchThreeSort started
[info] - mergeInPlace - Long (365 milliseconds)
[info] TestingUtilsSuite:
[info] - Comparing doubles using relative error. (107 milliseconds)
[info] - Comparing doubles using absolute error. (34 milliseconds)
[info] - Comparing vectors using relative error. (31 milliseconds)
[info] - Comparing vectors using absolute error. (6 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongBlockId started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNonexistent started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchOneSort started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 4.406s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testRetryAndUnrecoverable started
[info] - Comparing Matrices using absolute error. (548 milliseconds)
[info] - Comparing Matrices using relative error. (8 milliseconds)
[info] UtilsSuite:
[info] - EPSILON (6 milliseconds)
[info] MatricesSuite:
[info] - dense matrix construction (0 milliseconds)
[info] - dense matrix construction with wrong dimension (1 millisecond)
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnFirst started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testUnrecoverableFailure started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnSecond started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testThreeIOExceptions started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testNoFailures started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testTwoIOExceptions started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.348s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadSecret started
[info] - sparse matrix construction (267 milliseconds)
[info] - sparse matrix construction with wrong number of elements (2 milliseconds)
[info] - index in matrices incorrect input (4 milliseconds)
[info] - equals (21 milliseconds)
[info] - matrix copies are deep copies (1 millisecond)
[info] - matrix indexing and updating (1 millisecond)
[info] - dense to dense (2 milliseconds)
[info] - dense to sparse (2 milliseconds)
[info] - sparse to sparse (5 milliseconds)
[info] - sparse to dense (3 milliseconds)
[info] - compressed dense (5 milliseconds)
[info] - compressed sparse (3 milliseconds)
[info] - map, update (2 milliseconds)
[info] - transpose (1 millisecond)
[info] - foreachActive (2 milliseconds)
[info] - horzcat, vertcat, eye, speye (12 milliseconds)
[info] - zeros (1 millisecond)
[info] - ones (3 milliseconds)
[info] - eye (1 millisecond)
[info] - rand (170 milliseconds)
[info] - randn (2 milliseconds)
[info] - diag (1 millisecond)
[info] - sprand (8 milliseconds)
[info] - sprandn (3 milliseconds)
[info] - toString (14 milliseconds)
[info] - numNonzeros and numActives (1 millisecond)
[info] - fromBreeze with sparse matrix (30 milliseconds)
Jul 11, 2020 6:54:31 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Jul 11, 2020 6:54:31 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[info] - accuracy - String (2 seconds, 246 milliseconds)
[info] - row/col iterator (47 milliseconds)
[info] BreezeMatrixConversionSuite:
[info] - dense matrix to breeze (1 millisecond)
[info] - dense breeze matrix to matrix (2 milliseconds)
[info] - sparse matrix to breeze (2 milliseconds)
[info] - sparse breeze matrix to sparse matrix (1 millisecond)
[info] MultivariateGaussianSuite:
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadAppId started
Jul 11, 2020 6:54:31 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Jul 11, 2020 6:54:31 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
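These netlib warnings mean MLlib's linear algebra fell back to the pure-JVM F2J implementation. A common fix, sketched here under the assumption that a system BLAS/LAPACK such as OpenBLAS is installed on the workers, is to add netlib-java's native loaders to the build and verify at runtime which backend loaded:

    // build.sbt (sketch): netlib-java's "all" artifact is pom-only.
    libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()

    // Runtime check of which BLAS implementation actually loaded:
    println(com.github.fommil.netlib.BLAS.getInstance().getClass.getName)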
[info] - univariate (464 milliseconds)
[info] - multivariate (11 milliseconds)
[info] - multivariate degenerate (1 millisecond)
[info] - SPARK-11302 (6 milliseconds)
[info] BreezeVectorConversionSuite:
[info] - dense to breeze (0 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testValid started
[info] - sparse to breeze (175 milliseconds)
[info] - dense breeze to vector (1 millisecond)
[info] - sparse breeze to vector (1 millisecond)
[info] - sparse breeze with partially-used arrays to vector (1 millisecond)
[info] VectorsSuite:
[info] - dense vector construction with varargs (1 millisecond)
[info] - dense vector construction from a double array (1 millisecond)
[info] - sparse vector construction (0 milliseconds)
[info] - sparse vector construction with unordered elements (2 milliseconds)
[info] - sparse vector construction with mismatched indices/values array (1 millisecond)
[info] - sparse vector construction with too many indices vs size (2 milliseconds)
[info] - sparse vector construction with negative indices (1 millisecond)
[info] - dense to array (0 milliseconds)
[info] - dense argmax (0 milliseconds)
[info] - sparse to array (0 milliseconds)
[info] - sparse argmax (1 millisecond)
[info] - vector equals (3 milliseconds)
[info] - vectors equals with explicit 0 (2 milliseconds)
[info] - indexing dense vectors (0 milliseconds)
[info] - indexing sparse vectors (1 millisecond)
[info] - zeros (1 millisecond)
[info] - Vector.copy (1 millisecond)
[info] - fromBreeze (1 millisecond)
[info] - sqdist (44 milliseconds)
[info] - foreachActive (3 milliseconds)
[info] - vector p-norm (5 milliseconds)
[info] - Vector numActive and numNonzeros (2 milliseconds)
[info] - Vector toSparse and toDense (1 millisecond)
[info] - Vector.compressed (1 millisecond)
[info] - SparseVector.slice (1 millisecond)
[info] - sparse vector only support non-negative length (2 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testEncryption started
[info] BLASSuite:
[info] - copy (4 milliseconds)
[info] - scal (1 millisecond)
[info] - axpy (2 milliseconds)
[info] - dot (3 milliseconds)
[info] - spr (3 milliseconds)
[info] - syr (5 milliseconds)
[info] - gemm (8 milliseconds)
[info] - gemv (6 milliseconds)
[info] - spmv (2 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.574s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testGoodClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslServer started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testBadClient started
[info] - mergeInPlace - String (1 second, 352 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.212s
[info] Test run started
[info] Test org.apache.spark.network.sasl.ShuffleSecretManagerSuite.testMultipleRegisters started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithoutShuffleFiles started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.231s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testSaslAppIsolation started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testAuthEngineAppIsolation started
[info] UTF8StringPropertyCheckSuite:
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.721s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testNormalizeAndInternPathname started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testSortShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testBadRequests started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.jsonSerializationOfExecutorRegistration started
[info] - toString (130 milliseconds)
[info] - numChars (12 milliseconds)
[info] - startsWith (51 milliseconds)
[info] - endsWith (23 milliseconds)
[info] - toUpperCase (9 milliseconds)
[info] - toLowerCase (8 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.221s
[info] - compare (95 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testOpenShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testRegisterExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testBadMessages started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.024s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testEmptyBlockFetch started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailure started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailureAndSuccess started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchThree started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchOne started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.015s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.BlockTransferMessagesSuite.serializeOpenShuffleBlocks started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupOnlyRemovedApp started
[info] - substring (160 milliseconds)
[info] - contains (22 milliseconds)
[info] - trim, trimLeft, trimRight (133 milliseconds)
[info] - reverse (8 milliseconds)
[info] - indexOf (109 milliseconds)
[info] - repeat (11 milliseconds)
[info] - lpad, rpad (17 milliseconds)
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutCleanlyClosesClient started
[info] - concat (143 milliseconds)
[info] - concatWs (57 milliseconds)
[info] - split !!! IGNORED !!!
[info] - levenshteinDistance (12 milliseconds)
[info] - hashCode (3 milliseconds)
[info] - equals (4 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupUsesExecutor started
[info] Test run started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.addTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.equalsTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromYearMonthStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.toStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.subtractTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromCaseInsensitiveStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromSingleUnitStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromDayTimeStringTest started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 0.018s
[info] Test run started
[info] Test org.apache.spark.unsafe.array.LongArraySuite.basicTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOnHeapMemoryBlockResetsBaseObjectAndOffset started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.overlappingCopyMemory started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.memoryDebugFillEnabledInTest started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.offHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.heapMemoryReuse started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorPoolingReUsesLongArrays started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOffHeapMemoryBlockResetsOffset started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.062s
[info] Test run started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownLongInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownIntegerInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTest started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownBytesInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestPaddedStrings started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.noCleanupAndCleanup started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestBytes started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupMultipleExecutors started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.384s
[info] Test run started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.titleCase started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.soundex started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.basicTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamUnderflow started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToShort started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.startsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.compareTo started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.levenshteinDistance started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamOverflow started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamIntArray started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.upperAndLower started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToInt started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.createBlankString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.prefix started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatWsTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.repeat started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.contains started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.skipWrongFirstByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.emptyStringTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamSlice started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimBothWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substringSQL started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring_index started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.pad started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.split started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trims started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimRightWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.findInSet started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.translate started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.reverse started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimLeftWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.endsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToLong started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStream started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.indexOf started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.148s
[info] ScalaTest
[info] Run completed in 19 seconds, 199 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 44, Failed 0, Errors 0, Passed 44
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.561s
[info] JdbcRDDSuite:
[info] DistributedSuite:
[info] - accuracy - Byte array (4 seconds, 907 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
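The deprecated ParquetFileReader.readFooter calls above have a direct replacement in recent parquet-hadoop versions (this branch builds against 1.10): open the file through an InputFile and read the footer from the reader. A sketch, with the Configuration and path as stand-ins:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    val conf = new Configuration()
    val reader = ParquetFileReader.open(
      HadoopInputFile.fromPath(new Path("/tmp/example.parquet"), conf))
    try {
      // Equivalent of the deprecated readFooter(...).getFileMetaData
      val fileMetaData = reader.getFooter.getFileMetaData
      println(fileMetaData.getSchema)
    } finally reader.close()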
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
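The replacement named in the warning, Trigger.ProcessingTime, is passed to the streaming writer rather than constructed as its own case class. A minimal sketch, assuming `df` is an existing streaming DataFrame and using the console sink as a placeholder:

    import org.apache.spark.sql.streaming.Trigger

    val query = df.writeStream
      .format("console")                             // placeholder sink
      .trigger(Trigger.ProcessingTime("10 seconds")) // replaces ProcessingTime(...)
      .start()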
[info] - mergeInPlace - Byte array (2 seconds, 604 milliseconds)
[info] - incompatible merge (2 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
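All of these poll warnings come from the long-based overload that kafka-clients 2.0 deprecated; the Duration overload is the replacement. A sketch reusing the names that appear in the warnings (`consumer`, `pollTimeoutMs`):

    import java.time.Duration

    // Note: unlike poll(long), poll(Duration) does not block waiting for
    // metadata, so poll(Duration.ZERO) can return before assignment completes.
    val records = consumer.poll(Duration.ofMillis(pollTimeoutMs))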
[info] FlumePollingStreamSuite:
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
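The OneHotEncoder deprecation points at OneHotEncoderEstimator, which, unlike the old transformer, must be fit before it can transform. A sketch with placeholder column names, assuming `dataset` is a DataFrame holding a numeric "categoryIndex" column:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(dataset).transform(dataset)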
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
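The ILoop deprecation suggests the REPL's own commands for class-path changes instead of addedClasspath. In a Scala 2.11 REPL session that looks like this (the jar path is a placeholder):

    scala> :require /path/to/extra.jar
    scala> :replay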
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] - basic functionality (4 seconds, 101 milliseconds)
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutInactiveRequests started
[info] - large id overflow (783 milliseconds)
[info] SparkUncaughtExceptionHandlerSuite:
[info] - task throws not serializable exception (8 seconds, 116 milliseconds)
[info] - local-cluster format (6 milliseconds)
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
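sbt prints this warning whenever a project aggregates several objects with main methods; `show discoveredMainClasses` lists them, and pinning one silences the prompt. A build.sbt sketch in the 0.13 syntax this build uses, with a placeholder class name:

    mainClass in (Compile, run) := Some("org.example.Main")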
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.7-SNAPSHOT.jar ...
[info] Done packaging.
[info] ScalaTest
[info] Run completed in 23 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = true (2 seconds, 27 milliseconds)
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = false (1 second, 909 milliseconds)
[info] - simple groupByKey (5 seconds, 236 milliseconds)
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = true (1 second, 878 milliseconds)
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = false (1 second, 655 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = true (1 second, 790 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 3 total, 33.376s
[info] Test run started
[info] Test org.apache.spark.network.ProtocolSuite.responses started
[info] Test org.apache.spark.network.ProtocolSuite.requests started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.058s
[info] Test run started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariable started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariableConcurrent started
[info] - groupByKey where map output sizes exceed maxMbInFlight (6 seconds, 123 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = false (2 seconds, 965 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeFactoryBeforeCreateClient started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeBlockClientsWithFactory started
[info] Test org.apache.spark.network.TransportClientFactorySuite.neverReturnInactiveClients started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeIdleConnectionForRequestTimeOut started
[info] - flume polling test (14 seconds, 528 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.returnDifferentClientsForDifferentServers started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 4.823s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslClientFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslServerFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthReplay started
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = true (3 seconds, 421 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = false (3 seconds, 604 milliseconds)
[info] PipedRDDSuite:
[info] - accumulators (7 seconds, 382 milliseconds)
[info] - basic pipe (139 milliseconds)
[info] - basic pipe with tokenization (86 milliseconds)
[info] - failure in iterating over pipe input (98 milliseconds)
[info] - stdin writer thread should be exited when task is finished (161 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testNewAuth started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testLargeMessageEncryption started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthFailure started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 5.376s
[info] Test run started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamConcurrently started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendOneWayMessage started
[info] Test org.apache.spark.network.RpcIntegrationSuite.singleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.throwErrorRPC started
[info] - advanced pipe (602 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleTrouble started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.returnErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamFailures started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamOneAtATime started
[info] - pipe with empty partition (227 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendSuccessAndFailure started
[info] - pipe with env variable (70 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.734s
[info] Test run started
[info] Test org.apache.spark.network.crypto.TransportCipherSuite.testBufferNotLeaksOnInternalError started
[info] - pipe with process which cannot be launched due to bad command (39 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.025s
[info] Test run started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnException started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnClose started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.testActiveStreams started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulFetch started
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.clearAllOutstandingRequests started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.033s
[info] Test run started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testEmptyFrame started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testNegativeFrameSize started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testSplitLengthField started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testFrameDecoding started
[info] - pipe with process which is launched but fails with non-zero exit status (64 milliseconds)
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testInterception started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testRetainedFrames started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.074s
[info] Test run started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testDeallocateReleasesManagedBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testByteBufBody started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testShortWrite started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodySingleBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodyMultipleBuffers started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testSingleWrite started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.003s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testNonMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslAuthentication started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessageChunking started
[info] - basic pipe with separate working directory (134 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testServerAlwaysEncrypt started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDataEncryptionIsActuallyEnabled started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testFileRegionEncryption started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslEncryption started
[info] - test pipe exports map_input_file (362 milliseconds)
[info] - test pipe exports mapreduce_map_input_file (99 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDelegates started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessage started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testRpcHandlerDelegate started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.569s
[info] Test run started
[info] Test org.apache.spark.network.util.CryptoUtilsSuite.testConfConversion started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.TransportRequestHandlerSuite.handleFetchRequestAndStreamRequest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.016s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testServerResponse started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testClientChallenge started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.002s
[info] AccumulatorV2Suite:
[info] - LongAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - DoubleAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - ListAccumulator (1 millisecond)
[info] - LegacyAccumulatorWrapper (3 milliseconds)
[info] - LegacyAccumulatorWrapper with AccumulatorParam that has no equals/hashCode (5 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testGeneralNettyMemoryMetrics started
[info] FileSuite:
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testAdditionalMetrics started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.212s
[info] Test run started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchNonExistentChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchFileChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBothChunks started
[info] - text files (752 milliseconds)
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchChunkAndNonExistent started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBufferChunk started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.692s
[info] Test run started
[info] Test org.apache.spark.network.StreamSuite.testSingleStream started
[info] Test org.apache.spark.network.StreamSuite.testMultipleStreams started
[info] Test org.apache.spark.network.StreamSuite.testConcurrentStreams started
[info] Test org.apache.spark.network.StreamSuite.testZeroLengthStream started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.332s
[info] - text files (compressed) (933 milliseconds)
[info] - broadcast variables (4 seconds, 430 milliseconds)
[info] - SequenceFiles (435 milliseconds)
[info] - SequenceFile (compressed) (544 milliseconds)
[info] ReliableKafkaStreamSuite:
[info] - SequenceFile with writable key (448 milliseconds)
[info] - SequenceFile with writable value (396 milliseconds)
[info] - SequenceFile with writable key and value (311 milliseconds)
[info] - implicit conversions in reading SequenceFiles (602 milliseconds)
[info] - object files of ints (291 milliseconds)
[info] - object files of complex types (337 milliseconds)
[info] - repeatedly failing task (4 seconds, 31 milliseconds)
[info] - flume polling test multiple hosts (14 seconds, 529 milliseconds)
[info] - object files of classes from a JAR (1 second, 439 milliseconds)
[info] FlumeStreamSuite:
[info] - write SequenceFile using new Hadoop API (367 milliseconds)
[info] - read SequenceFile using new Hadoop API (285 milliseconds)
[info] - binary file input as byte array (258 milliseconds)
[info] - portabledatastream caching tests (284 milliseconds)
[info] - flume input stream (1 second, 266 milliseconds)
[info] - portabledatastream persist disk storage (427 milliseconds)
[info] - portabledatastream flatmap tests (2 seconds, 108 milliseconds)
[info] - SPARK-22357 test binaryFiles minPartitions (576 milliseconds)
[info] - minimum split size per node and per rack should be less than or equal to maxSplitSize (234 milliseconds)
[info] - flume input compressed stream (3 seconds, 196 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumeStreamSuite.testFlumeStream started
[info] - fixed record length binary file as byte array (202 milliseconds)
[info] - negative binary record length should raise an exception (196 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.353s
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumePollingStreamSuite.testFlumeStream started
[info] - file caching (196 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.234s
[info] - prevent user from overwriting the empty directory (old Hadoop API) (110 milliseconds)
[info] - prevent user from overwriting the non-empty directory (old Hadoop API) (195 milliseconds)
[info] - allow user to disable the output directory existence checking (old Hadoop API) (766 milliseconds)
[info] - prevent user from overwriting the empty directory (new Hadoop API) (114 milliseconds)
[info] KafkaRDDSuite:
[info] - Reliable Kafka input stream with single topic (9 seconds, 177 milliseconds)
[info] - prevent user from overwriting the non-empty directory (new Hadoop API) (258 milliseconds)
[info] - allow user to disable the output directory existence checking (new Hadoop API) (287 milliseconds)
[info] - save Hadoop Dataset through old Hadoop API (199 milliseconds)
[info] - save Hadoop Dataset through new Hadoop API (191 milliseconds)
[info] - Get input files via old Hadoop API (311 milliseconds)
[info] - Get input files via new Hadoop API (318 milliseconds)
[info] - spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD (352 milliseconds)
[info] - repeatedly failing task that crashes JVM (11 seconds, 247 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API) (530 milliseconds)
[info] - Reliable Kafka input stream with multiple topics (1 second, 534 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API) (590 milliseconds)
[info] - spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD (563 milliseconds)
[info] LogPageSuite:
[info] - get logs simple (269 milliseconds)
[info] PartiallyUnrolledIteratorSuite:
[info] KafkaStreamSuite:
[info] - join two iterators (76 milliseconds)
[info] HistoryServerDiskManagerSuite:
[info] - leasing space (169 milliseconds)
[info] - tracking active stores (33 milliseconds)
[info] - approximate size heuristic (1 millisecond)
[info] - SPARK-32024: update ApplicationStoreInfo.size during initializing (52 milliseconds)
[info] ExternalShuffleServiceSuite:
[info] - groupByKey without compression (325 milliseconds)
[info] - Kafka input stream (2 seconds, 20 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - basic stream receiving with multiple topics and smallest starting offset (1 second, 264 milliseconds)
[info] - receiving from largest starting offset (591 milliseconds)
[info] - creating stream by offset (773 milliseconds)
[info] - shuffle non-zero block size (5 seconds, 743 milliseconds)
[info] - offset recovery (3 seconds, 649 milliseconds)
[info] - Direct Kafka stream report input information (479 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (91 milliseconds)
[info] - repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925) (12 seconds, 325 milliseconds)
[info] - maxMessagesPerPartition with no lag (96 milliseconds)
[info] - maxMessagesPerPartition respects max rate (74 milliseconds)
[info] - using rate controller (1 second, 310 milliseconds)
[info] - shuffle serializer (4 seconds, 453 milliseconds)
[info] - use backpressure.initialRate with backpressure (425 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (357 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to one (84 milliseconds)
[info] KafkaRDDSuite:
[info] - basic usage (310 milliseconds)
[info] - iterator boundary conditions (494 milliseconds)
[info] KafkaClusterSuite:
[info] - caching (encryption = off) (5 seconds, 560 milliseconds)
[info] - metadata apis (193 milliseconds)
[info] - leader offset apis (5 milliseconds)
[info] - consumer offset apis (100 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 2.782s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaRDDSuite.testKafkaRDD started
[info] - zero sized blocks (7 seconds, 939 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.608s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - caching (encryption = on) (5 seconds, 722 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.719s
[info] LabelPropagationSuite:
[info] - caching on disk (encryption = off) (6 seconds, 62 milliseconds)
[info] - zero sized blocks without kryo (8 seconds, 224 milliseconds)
[info] - caching on disk (encryption = on) (5 seconds, 836 milliseconds)
[info] - shuffle on mutable pairs (5 seconds, 563 milliseconds)
[info] - Label Propagation (12 seconds, 848 milliseconds)
[info] BytecodeUtilsSuite:
[info] - closure invokes a method (13 milliseconds)
[info] - closure inside a closure invokes a method (5 milliseconds)
[info] - closure inside a closure inside a closure invokes a method (5 milliseconds)
[info] - closure calling a function that invokes a method (4 milliseconds)
[info] - closure calling a function that invokes a method which uses another closure (7 milliseconds)
[info] - nested closure (4 milliseconds)
[info] PregelSuite:
[info] - 1 iteration (1 second, 203 milliseconds)
[info] - sorting on mutable pairs (7 seconds, 445 milliseconds)
[info] - caching in memory, replicated (encryption = off) (7 seconds, 900 milliseconds)
[info] - chain propagation (4 seconds, 269 milliseconds)
[info] PeriodicGraphCheckpointerSuite:
[info] - Persisting (203 milliseconds)
[info] - Checkpointing (3 seconds, 660 milliseconds)
[info] ConnectedComponentsSuite:
[info] - cogroup using mutable pairs (6 seconds, 712 milliseconds)
[info] - caching in memory, replicated (encryption = off) (with replication as stream) (7 seconds, 755 milliseconds)
[info] - Grid Connected Components (5 seconds, 768 milliseconds)
[info] - subtract mutable pairs (6 seconds, 124 milliseconds)
[info] - caching in memory, replicated (encryption = on) (7 seconds, 34 milliseconds)
[info] - Reverse Grid Connected Components (5 seconds, 213 milliseconds)
[info] - sort with Java non serializable class - Kryo (7 seconds, 407 milliseconds)
[info] - caching in memory, replicated (encryption = on) (with replication as stream) (8 seconds, 414 milliseconds)
[info] - Chain Connected Components (8 seconds, 371 milliseconds)
[info] - sort with Java non serializable class - Java (5 seconds, 68 milliseconds)
[info] - shuffle with different compression settings (SPARK-3426) (663 milliseconds)
[info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (603 milliseconds)
[info] - cannot find its local shuffle file if no execution of the stage and rerun shuffle (190 milliseconds)
[info] - metrics for shuffle without aggregation (686 milliseconds)
[info] - metrics for shuffle with aggregation (1 second, 64 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (5 seconds, 821 milliseconds)
[info] - multiple simultaneous attempts for one task (SPARK-8029) (69 milliseconds)
[info] - Reverse Chain Connected Components (6 seconds, 432 milliseconds)
[info] - Connected Components on a Toy Connected Graph (1 second, 36 milliseconds)
[info] VertexRDDSuite:
[info] - filter (466 milliseconds)
[info] - mapValues (661 milliseconds)
[info] - minus (365 milliseconds)
[info] - minus with RDD[(VertexId, VD)] (499 milliseconds)
[info] - minus with non-equal number of partitions (969 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (with replication as stream) (5 seconds, 299 milliseconds)
[info] - using external shuffle service (5 seconds, 476 milliseconds)
[info] ConfigEntrySuite:
[info] - conf entry: int (1 millisecond)
[info] - conf entry: long (0 milliseconds)
[info] - conf entry: double (1 millisecond)
[info] - conf entry: boolean (0 milliseconds)
[info] - conf entry: optional (0 milliseconds)
[info] - conf entry: fallback (1 millisecond)
[info] - conf entry: time (1 millisecond)
[info] - conf entry: bytes (1 millisecond)
[info] - conf entry: regex (2 milliseconds)
[info] - conf entry: string seq (1 millisecond)
[info] - conf entry: int seq (2 milliseconds)
[info] - conf entry: transformation (1 millisecond)
[info] - conf entry: checkValue() (1 millisecond)
[info] - conf entry: valid values check (1 millisecond)
[info] - conf entry: conversion error (2 milliseconds)
[info] - default value handling is null-safe (1 millisecond)
[info] - variable expansion of spark config entries (8 milliseconds)
[info] - conf entry : default function (1 millisecond)
[info] - conf entry: alternative keys (1 millisecond)
[info] - onCreate (2 milliseconds)
[info] InputOutputMetricsSuite:
[info] - diff (556 milliseconds)
[info] - input metrics for old hadoop with coalesce (182 milliseconds)
[info] - input metrics with cache and coalesce (261 milliseconds)
[info] - input metrics for new Hadoop API with coalesce (110 milliseconds)
[info] - input metrics when reading text file (45 milliseconds)
[info] - diff with RDD[(VertexId, VD)] (619 milliseconds)
[info] - input metrics on records read - simple (46 milliseconds)
[info] - input metrics on records read - more stages (170 milliseconds)
[info] - input metrics on records - New Hadoop API (77 milliseconds)
[info] - diff vertices with non-equal number of partitions (447 milliseconds)
[info] - input metrics on records read with cache (152 milliseconds)
[info] - input read/write and shuffle read/write metrics all line up (263 milliseconds)
[info] - leftJoin (823 milliseconds)
[info] - input metrics with interleaved reads (586 milliseconds)
[info] - output metrics on records written (126 milliseconds)
[info] - leftJoin vertices with non-equal number of partitions (440 milliseconds)
[info] - output metrics on records written - new Hadoop API (139 milliseconds)
[info] - output metrics when writing text file (99 milliseconds)
[info] - input metrics with old CombineFileInputFormat (54 milliseconds)
[info] - input metrics with new CombineFileInputFormat (82 milliseconds)
[info] - input metrics with old Hadoop API in different thread (95 milliseconds)
[info] - input metrics with new Hadoop API in different thread (130 milliseconds)
[info] - innerJoin (697 milliseconds)
[info] AppStatusStoreSuite:
[info] - quantile calculation: 1 task (44 milliseconds)
[info] - quantile calculation: few tasks (5 milliseconds)
[info] - quantile calculation: more tasks (22 milliseconds)
[info] - quantile calculation: lots of tasks (135 milliseconds)
[info] - quantile calculation: custom quantiles (81 milliseconds)
[info] - innerJoin vertices with the non-equal number of partitions (426 milliseconds)
[info] - quantile cache (212 milliseconds)
[info] - SPARK-28638: only successful tasks have taskSummary when with in memory kvstore (3 milliseconds)
[info] - SPARK-28638: summary should contain successful tasks only when with in memory kvstore (19 milliseconds)
[info] CountEvaluatorSuite:
[info] - test count 0 (1 millisecond)
[info] - test count >= 1 (32 milliseconds)
[info] TaskResultGetterSuite:
[info] - handling results smaller than max RPC message size (84 milliseconds)
[info] - aggregateUsingIndex (603 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (4 seconds, 568 milliseconds)
[info] - handling results larger than max RPC message size (469 milliseconds)
[info] - mergeFunc (246 milliseconds)
[info] - handling total size of results larger than maxResultSize (164 milliseconds)
[info] - cache, getStorageLevel (122 milliseconds)
[info] - task retried if result missing from block manager (492 milliseconds)
[info] - failed task deserialized with the correct classloader (SPARK-11195) (399 milliseconds)
[info] - checkpoint (876 milliseconds)
[info] - task result size is set on the driver, not the executors (135 milliseconds)
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:304)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[info] - failed task is handled when error occurs deserializing the reason (129 milliseconds)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:193)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply$mcV$sp(TaskResultGetter.scala:142)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4.run(TaskResultGetter.scala:138)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] SerializerPropertiesSuite:
[info] - JavaSerializer does not support relocation (3 milliseconds)
[info] - KryoSerializer supports relocation when auto-reset is enabled (125 milliseconds)
[info] - KryoSerializer does not support relocation when auto-reset is disabled (20 milliseconds)
[info] DriverRunnerTest:
[info] - Process succeeds instantly (84 milliseconds)
[info] - count (539 milliseconds)
[info] EdgePartitionSuite:
[info] - reverse (9 milliseconds)
[info] - map (3 milliseconds)
[info] - Process failing several times and then succeeding (41 milliseconds)
[info] - filter (8 milliseconds)
[info] - groupEdges (2 milliseconds)
[info] - innerJoin (2 milliseconds)
[info] - isActive, numActives, replaceActives (1 millisecond)
[info] - tripletIterator (1 millisecond)
[info] - Process doesn't restart if not supervised (37 milliseconds)
[info] - serialization (31 milliseconds)
[info] EdgeSuite:
[info] - compare (1 millisecond)
[info] PageRankSuite:
[info] - Process doesn't restart if killed (35 milliseconds)
[info] - Reset of backoff counter (43 milliseconds)
[info] - Kill process finalized with state KILLED (51 milliseconds)
[info] - Finalized with state FINISHED (48 milliseconds)
[info] - Finalized with state FAILED (45 milliseconds)
[info] - Handle exception starting process (40 milliseconds)
[info] MapOutputTrackerSuite:
[info] - master start and stop (93 milliseconds)
[info] - master register shuffle and fetch (97 milliseconds)
[info] - master register and unregister shuffle (61 milliseconds)
[info] - master register shuffle and unregister map output and fetch (69 milliseconds)
[info] - remote fetch (155 milliseconds)
[info] - remote fetch below max RPC message size (122 milliseconds)
[info] - min broadcast size exceeds max RPC message size (37 milliseconds)
[info] - getLocationsWithLargestOutputs with multiple outputs in same machine (100 milliseconds)
[info] - Star PageRank (2 seconds, 129 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (with replication as stream) (4 seconds, 854 milliseconds)
[info] - Star PersonalPageRank (4 seconds, 114 milliseconds)
[info] - caching on disk, replicated (encryption = off) (5 seconds, 36 milliseconds)
[info] - caching on disk, replicated (encryption = off) (with replication as stream) (4 seconds, 495 milliseconds)
[info] - Grid PageRank (13 seconds, 706 milliseconds)
[info] - caching on disk, replicated (encryption = on) (7 seconds, 865 milliseconds)
[info] - Chain PageRank (3 seconds, 627 milliseconds)
[info] - remote fetch using broadcast (23 seconds, 970 milliseconds)
[info] - equally divide map statistics tasks (126 milliseconds)
[info] - zero-sized blocks should be excluded when getMapSizesByExecutorId (401 milliseconds)
[info] PythonBroadcastSuite:
[info] - PythonBroadcast can be serialized with Kryo (SPARK-4882) (23 milliseconds)
[info] ExecutorRunnerTest:
[info] - command includes appId (48 milliseconds)
[info] CompressionCodecSuite:
[info] - default compression codec (5 milliseconds)
[info] - lz4 compression codec (3 milliseconds)
[info] - lz4 compression codec short form (1 millisecond)
[info] - lz4 supports concatenation of serialized streams (2 milliseconds)
[info] - lzf compression codec (15 milliseconds)
[info] - lzf compression codec short form (18 milliseconds)
[info] - lzf supports concatenation of serialized streams (1 millisecond)
[info] - snappy compression codec (44 milliseconds)
[info] - snappy compression codec short form (2 milliseconds)
[info] - snappy supports concatenation of serialized streams (2 milliseconds)
[info] - zstd compression codec (20 milliseconds)
[info] - zstd compression codec short form (2 milliseconds)
[info] - zstd supports concatenation of serialized streams (2 milliseconds)
[info] - bad compression codec (2 milliseconds)
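The codec suite above resolves short names to codec implementations; since org.apache.spark.io.CompressionCodec is internal, the user-level equivalent is just selecting the codec by its short form (the "bad compression codec" case is any unrecognized name). A sketch:

    import org.apache.spark.SparkConf

    object CodecSketch {
      // "lz4" (the default), "lzf", "snappy" and "zstd" are the short forms
      // exercised above; an unknown value fails at codec creation time.
      val conf = new SparkConf()
        .setAppName("codec-sketch")
        .set("spark.io.compression.codec", "zstd")
    }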
[info] MetricsSystemSuite:
[info] - MetricsSystem with default config (3 milliseconds)
[info] - MetricsSystem with sources add (8 milliseconds)
[info] - MetricsSystem with Driver instance (2 milliseconds)
[info] - MetricsSystem with Driver instance and spark.app.id is not set (3 milliseconds)
[info] - MetricsSystem with Driver instance and spark.executor.id is not set (3 milliseconds)
[info] - MetricsSystem with Executor instance (2 milliseconds)
[info] - MetricsSystem with Executor instance and spark.app.id is not set (3 milliseconds)
[info] - MetricsSystem with Executor instance and spark.executor.id is not set (2 milliseconds)
[info] - MetricsSystem with instance which is neither Driver nor Executor (2 milliseconds)
[info] - MetricsSystem with Executor instance, with custom namespace (2 milliseconds)
[info] - MetricsSystem with Executor instance, custom namespace which is not set (2 milliseconds)
[info] - MetricsSystem with Executor instance, custom namespace, spark.executor.id not set (25 milliseconds)
[info] - MetricsSystem with non-driver, non-executor instance with custom namespace (5 milliseconds)
[info] ConfigReaderSuite:
[info] - variable expansion (3 milliseconds)
[info] - circular references (2 milliseconds)
[info] - spark conf provider filters config keys (1 millisecond)
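ConfigReaderSuite exercises Spark's variable expansion for config values. A hedged sketch of the syntax (the spark.my.* keys are made up for illustration; expansion applies when values are resolved through Spark's internal ConfigReader, not to arbitrary user lookups):

    import org.apache.spark.SparkConf

    object ConfExpansionSketch {
      // A value can reference another key, or env/system properties, with
      // ${...}; circular references (a -> b -> a) are rejected.
      val conf = new SparkConf()
        .set("spark.my.base.dir", "/tmp/spark")                // hypothetical key
        .set("spark.my.log.dir", "${spark.my.base.dir}/logs")  // expanded on read
        .set("spark.my.user", "${env:USER}")
    }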
[info] DoubleRDDSuite:
[info] - caching on disk, replicated (encryption = on) (with replication as stream) (6 seconds, 221 milliseconds)
[info] - sum (135 milliseconds)
[info] - WorksOnEmpty (72 milliseconds)
[info] - WorksWithOutOfRangeWithOneBucket (67 milliseconds)
[info] - WorksInRangeWithOneBucket (57 milliseconds)
[info] - WorksInRangeWithOneBucketExactMatch (66 milliseconds)
[info] - WorksWithOutOfRangeWithTwoBuckets (61 milliseconds)
[info] - WorksWithOutOfRangeWithTwoUnEvenBuckets (43 milliseconds)
[info] - WorksInRangeWithTwoBuckets (67 milliseconds)
[info] - WorksInRangeWithTwoBucketsAndNaN (117 milliseconds)
[info] - WorksInRangeWithTwoUnevenBuckets (38 milliseconds)
[info] - WorksMixedRangeWithTwoUnevenBuckets (22 milliseconds)
[info] - WorksMixedRangeWithFourUnevenBuckets (42 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaN (19 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange (24 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity (41 milliseconds)
[info] - WorksWithOutOfRangeWithInfiniteBuckets (35 milliseconds)
[info] - ThrowsExceptionOnInvalidBucketArray (2 milliseconds)
[info] - WorksWithoutBucketsBasic (128 milliseconds)
[info] - WorksWithoutBucketsBasicSingleElement (43 milliseconds)
[info] - WorksWithoutBucketsBasicNoRange (45 milliseconds)
[info] - WorksWithoutBucketsBasicTwo (43 milliseconds)
[info] - WorksWithDoubleValuesAtMinMax (89 milliseconds)
[info] - WorksWithoutBucketsWithMoreRequestedThanElements (47 milliseconds)
[info] - WorksWithoutBucketsForLargerDatasets (45 milliseconds)
[info] - WorksWithoutBucketsWithNonIntegralBucketEdges (41 milliseconds)
[info] - WorksWithHugeRange (728 milliseconds)
[info] - ThrowsExceptionOnInvalidRDDs (59 milliseconds)
[info] NextIteratorSuite:
[info] - one iteration (15 milliseconds)
[info] - two iterations (2 milliseconds)
[info] - empty iteration (2 milliseconds)
[info] - close is called once for empty iterations (1 millisecond)
[info] - close is called once for non-empty iterations (1 millisecond)
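NextIterator is a small internal utility whose contract the suite above pins down: hasNext computes the next element lazily, and close() runs exactly once, including for empty iterations. A self-contained re-implementation of the idea (not Spark's class itself):

    abstract class SimpleNextIterator[T] extends Iterator[T] {
      private var gotNext = false
      private var finished = false
      private var closed = false
      protected var nextValue: T = _

      protected def getNext(): T    // must set finished = true at end of input
      protected def close(): Unit   // release resources; called exactly once

      private def closeIfNeeded(): Unit = if (!closed) { closed = true; close() }

      override def hasNext: Boolean = {
        if (!finished && !gotNext) {
          nextValue = getNext()
          if (finished) closeIfNeeded()
          gotNext = true
        }
        !finished
      }

      override def next(): T = {
        if (!hasNext) throw new NoSuchElementException("End of stream")
        gotNext = false
        nextValue
      }
    }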
[info] SparkSubmitSuite:
[info] - prints usage on empty input (57 milliseconds)
[info] - prints usage with only --help (5 milliseconds)
[info] - prints error with unrecognized options (2 milliseconds)
[info] - handle binary specified but not class (110 milliseconds)
[info] - handles arguments with --key=val (5 milliseconds)
[info] - handles arguments to user program (4 milliseconds)
[info] - handles arguments to user program with name collision (2 milliseconds)
[info] - print the right queue name (15 milliseconds)
[info] - SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled (3 milliseconds)
[info] - specify deploy mode through configuration (429 milliseconds)
[info] - handles YARN cluster mode (34 milliseconds)
[info] - handles YARN client mode (62 milliseconds)
[info] - handles standalone cluster mode (27 milliseconds)
[info] - handles legacy standalone cluster mode (25 milliseconds)
[info] - handles standalone client mode (61 milliseconds)
[info] - handles mesos client mode (59 milliseconds)
[info] - handles k8s cluster mode (36 milliseconds)
[info] - handles confs with flag equivalents (23 milliseconds)
[info] - Chain PersonalizedPageRank (7 seconds, 31 milliseconds)
[info] - SPARK-21568 ConsoleProgressBar should be enabled only in shells (129 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (6 seconds, 719 milliseconds)
[info] - launch simple application with spark-submit (9 seconds, 94 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (with replication as stream) (7 seconds, 205 milliseconds)
[info] - basic usage (2 minutes, 5 seconds)
[info] - caching in memory and disk, replicated (encryption = on) (6 seconds, 826 milliseconds)
[info] - launch simple application with spark-submit with redaction (10 seconds, 39 milliseconds)
[info] - Loop with source PageRank (22 seconds, 36 milliseconds)
[info] - caching in memory and disk, replicated (encryption = on) (with replication as stream) (5 seconds, 246 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (5 seconds, 834 milliseconds)
[info] - includes jars passed in through --jars (18 seconds, 859 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream) (10 seconds, 922 milliseconds)
[info] - Loop with sink PageRank (23 seconds, 3 milliseconds)
[info] EdgeRDDSuite:
[info] - cache, getStorageLevel (160 milliseconds)
[info] - checkpointing (509 milliseconds)
[info] - count (252 milliseconds)
[info] GraphSuite:
[info] - Graph.fromEdgeTuples (592 milliseconds)
[info] - Graph.fromEdges (241 milliseconds)
[info] - Graph.apply (719 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (7 seconds, 496 milliseconds)
[info] - triplets (893 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream) (8 seconds, 57 milliseconds)
[info] - includes jars passed in through --packages (20 seconds, 159 milliseconds)
[info] - partitionBy (12 seconds, 771 milliseconds)
[info] - mapVertices (449 milliseconds)
[info] - mapVertices changing type with same erased type (593 milliseconds)
[info] - mapEdges (286 milliseconds)
[info] - compute without caching when no partitions fit in memory (5 seconds, 878 milliseconds)
[info] - mapTriplets (440 milliseconds)
[info] - reverse (670 milliseconds)
[info] - reverse with join elimination (611 milliseconds)
[info] - subgraph (863 milliseconds)
[info] - mask (506 milliseconds)
[info] - groupEdges (716 milliseconds)
[info] - aggregateMessages (676 milliseconds)
[info] - outerJoinVertices (1 second, 5 milliseconds)
[info] - more edge partitions than vertex partitions (520 milliseconds)
[info] - checkpoint (490 milliseconds)
[info] - cache, getStorageLevel (126 milliseconds)
[info] - non-default number of edge partitions (691 milliseconds)
[info] - compute when only some partitions fit in memory (7 seconds, 175 milliseconds)
[info] - unpersist graph RDD (862 milliseconds)
[info] - SPARK-14219: pickRandomVertex (282 milliseconds)
[info] ShortestPathsSuite:
[info] - Shortest Path Computations (991 milliseconds)
[info] GraphOpsSuite:
[info] - joinVertices (413 milliseconds)
[info] - collectNeighborIds (723 milliseconds)
[info] - removeSelfEdges (503 milliseconds)
[info] - filter (786 milliseconds)
[info] - convertToCanonicalEdges (529 milliseconds)
[info] - collectEdgesCycleDirectionOut (803 milliseconds)
[info] - passing environment variables to cluster (5 seconds, 712 milliseconds)
[info] - includes jars passed through spark.jars.packages and spark.jars.repositories (17 seconds, 940 milliseconds)
[info] - correctly builds R packages included in a jar with --packages !!! IGNORED !!!
[info] - collectEdgesCycleDirectionIn (511 milliseconds)
[info] - collectEdgesCycleDirectionEither (472 milliseconds)
[info] - collectEdgesChainDirectionOut (406 milliseconds)
[info] - collectEdgesChainDirectionIn (445 milliseconds)
[info] - collectEdgesChainDirectionEither (439 milliseconds)
[info] StronglyConnectedComponentsSuite:
[info] - Island Strongly Connected Components (1 second, 55 milliseconds)
[info] - Cycle Strongly Connected Components (3 seconds, 793 milliseconds)
[info] - recover from node failures (7 seconds, 755 milliseconds)
[info] - 2 Cycle Strongly Connected Components (2 seconds, 833 milliseconds)
[info] VertexPartitionSuite:
[info] - isDefined, filter (10 milliseconds)
[info] - map (1 millisecond)
[info] - diff (2 milliseconds)
[info] - leftJoin (12 milliseconds)
[info] - innerJoin (10 milliseconds)
[info] - createUsingIndex (1 millisecond)
[info] - innerJoinKeepLeft (7 milliseconds)
[info] - aggregateUsingIndex (2 milliseconds)
[info] - reindex (6 milliseconds)
[info] - serialization (13 milliseconds)
[info] GraphLoaderSuite:
[info] - GraphLoader.edgeListFile (634 milliseconds)
[info] TriangleCountSuite:
[info] - Count a single triangle (653 milliseconds)
[info] - include an external JAR in SparkR (11 seconds, 712 milliseconds)
[info] - Count two triangles (683 milliseconds)
[info] - resolves command line argument paths correctly (149 milliseconds)
[info] - ambiguous archive mapping results in error message (21 milliseconds)
[info] - resolves config paths correctly (177 milliseconds)
[info] - Count two triangles with bi-directed edges (658 milliseconds)
[info] - Count a single triangle with duplicate edges (731 milliseconds)
[info] GraphGeneratorsSuite:
[info] - GraphGenerators.generateRandomEdges (3 milliseconds)
[info] - GraphGenerators.sampleLogNormal (7 milliseconds)
[info] - GraphGenerators.logNormalGraph (403 milliseconds)
[info] - SPARK-5064 GraphGenerators.rmatGraph numEdges upper bound (127 milliseconds)
[info] SVDPlusPlusSuite:
[info] - user classpath first in driver (2 seconds, 812 milliseconds)
[info] - SPARK_CONF_DIR overrides spark-defaults.conf (9 milliseconds)
[info] - Test SVD++ with mean square error on training set (1 second, 176 milliseconds)
[info] - support glob path (71 milliseconds)
[info] - downloadFile - invalid url (56 milliseconds)
[info] - downloadFile - file doesn't exist (45 milliseconds)
[info] - downloadFile does not download local file (35 milliseconds)
[info] - Test SVD++ with no edges (259 milliseconds)
[info] - download one file to local (55 milliseconds)
[info] - download list of files to local (47 milliseconds)
[info] - remove copies of application jar from classpath (50 milliseconds)
[info] - Avoid re-upload remote resources in yarn client mode (70 milliseconds)
[info] - download remote resource if it is not supported by yarn service (74 milliseconds)
[info] - avoid downloading remote resource if it is supported by yarn service (71 milliseconds)
[info] - force download from blacklisted schemes (70 milliseconds)
[info] - force download for all the schemes (68 milliseconds)
[info] - start SparkApplication without modifying system properties (71 milliseconds)
[info] - support --py-files/spark.submit.pyFiles in non pyspark application (125 milliseconds)
[info] - handles natural line delimiters in --properties-file and --conf uniformly (74 milliseconds)
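Many SparkSubmitSuite cases above launch spark-submit with various flags; the supported public way to do the same from code is SparkLauncher (all paths and names below are placeholders):

    import org.apache.spark.launcher.SparkLauncher

    object SubmitSketch {
      def main(args: Array[String]): Unit = {
        val proc = new SparkLauncher()
          .setAppResource("/path/to/app.jar")   // placeholder
          .setMainClass("com.example.Main")     // placeholder
          .setMaster("local[2]")
          .setConf("spark.ui.enabled", "false")
          .addAppArgs("--key=val")              // passed through to the user program
          .launch()
        sys.exit(proc.waitFor())
      }
    }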
[info] NettyRpcEnvSuite:
[info] - send a message locally (2 milliseconds)
[info] - send a message remotely (145 milliseconds)
[info] - send a RpcEndpointRef (2 milliseconds)
[info] - ask a message locally (2 milliseconds)
[info] - ask a message remotely (80 milliseconds)
[info] - ask a message timeout (93 milliseconds)
[info] - onStart and onStop (2 milliseconds)
[info] - onError: error in onStart (2 milliseconds)
[info] - onError: error in onStop (2 milliseconds)
[info] - onError: error in receive (2 milliseconds)
[info] - self: call in onStart (2 milliseconds)
[info] - self: call in receive (1 millisecond)
[info] - self: call in onStop (2 milliseconds)
[info] MesosSchedulerUtilsSuite:
[info] - call receive in sequence (699 milliseconds)
[info] - stop(RpcEndpointRef) reentrant (2 milliseconds)
[info] - sendWithReply (2 milliseconds)
[info] - sendWithReply: remotely (61 milliseconds)
[info] - sendWithReply: error (2 milliseconds)
[info] - use at-least minimum overhead (391 milliseconds)
[info] - sendWithReply: remotely error (153 milliseconds)
[info] - use overhead if it is greater than minimum value (4 milliseconds)
[info] - use spark.mesos.executor.memoryOverhead (if set) (4 milliseconds)
[info] - parse a non-empty constraint string correctly (23 milliseconds)
[info] - parse an empty constraint string correctly (1 millisecond)
[info] - throw an exception when the input is malformed (7 milliseconds)
[info] - empty values for attributes' constraints matches all values (35 milliseconds)
[info] - subset match is performed for set attributes (6 milliseconds)
[info] - less than equal match is performed on scalar attributes (8 milliseconds)
[info] - contains match is performed for range attributes (49 milliseconds)
[info] - equality match is performed for text attributes (2 milliseconds)
[info] - network events in server RpcEnv when another RpcEnv is in server mode (175 milliseconds)
[info] - Port reservation is done correctly with user specified ports only (48 milliseconds)
[info] - Port reservation is done correctly with all random ports (4 milliseconds)
[info] - Port reservation is done correctly with user specified ports only - multiple ranges (4 milliseconds)
[info] - Port reservation is done correctly with all random ports - multiple ranges (3 milliseconds)
[info] - Principal specified via spark.mesos.principal (17 milliseconds)
[info] - Principal specified via spark.mesos.principal.file (18 milliseconds)
[info] - Principal specified via spark.mesos.principal.file that does not exist (3 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL (2 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE (2 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE that does not exist (3 milliseconds)
[info] - Secret specified via spark.mesos.secret (2 milliseconds)
[info] - Secret specified via spark.mesos.secret.file (3 milliseconds)
[info] - Secret specified via spark.mesos.secret.file that does not exist (3 milliseconds)
[info] - Secret specified via SPARK_MESOS_SECRET (4 milliseconds)
[info] - Secret specified via SPARK_MESOS_SECRET_FILE (3 milliseconds)
[info] - Secret specified with no principal (2 milliseconds)
[info] - Principal specification preference (2 milliseconds)
[info] - Secret specification preference (1 millisecond)
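The principal/secret block above checks every way Mesos framework credentials can be supplied (config key, *.file variant, or SPARK_MESOS_* environment variable) plus the precedence among them, and that a secret without a principal is rejected. The plain-config form, with placeholder values:

    import org.apache.spark.SparkConf

    object MesosAuthSketch {
      val conf = new SparkConf()
        .set("spark.mesos.principal", "spark-framework")  // placeholder
        .set("spark.mesos.secret", sys.env.getOrElse("MESOS_SECRET", ""))
    }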
[info] MesosSchedulerBackendUtilSuite:
[info] - network events in server RpcEnv when another RpcEnv is in client mode (183 milliseconds)
[info] - ContainerInfo fails to parse invalid docker parameters (174 milliseconds)
[info] - ContainerInfo parses docker parameters (5 milliseconds)
[info] - recover from repeated node failures during shuffle-map (10 seconds, 355 milliseconds)
[info] - network events in client RpcEnv when another RpcEnv is in server mode (205 milliseconds)
[info] - SPARK-28778 ContainerInfo respects Docker network configuration (30 milliseconds)
[info] MesosFineGrainedSchedulerBackendSuite:
[info] - sendWithReply: unserializable error (75 milliseconds)
[info] - weburi is set in created scheduler driver (66 milliseconds)
[info] - port conflict (46 milliseconds)
[info] - Use configured mesosExecutor.cores for ExecutorInfo (69 milliseconds)
[info] - check spark-class location correctly (10 milliseconds)
[info] - spark docker properties correctly populate the DockerInfo message (18 milliseconds)
[info] - mesos resource offers result in launching tasks (83 milliseconds)
[info] - can handle multiple roles (8 milliseconds)
[info] MesosCoarseGrainedSchedulerBackendSuite:
[info] - send with authentication (437 milliseconds)
[info] - send with SASL encryption (127 milliseconds)
[info] - send with AES encryption (146 milliseconds)
[info] - ask with authentication (187 milliseconds)
[info] - ask with SASL encryption (96 milliseconds)
[info] - ask with AES encryption (118 milliseconds)
[info] - construct RpcTimeout with conf property (3 milliseconds)
[info] - ask a message timeout on Future using RpcTimeout (25 milliseconds)
[info] - file server (123 milliseconds)
[info] - SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events (134 milliseconds)
[info] - non-existent endpoint (2 milliseconds)
[info] - advertise address different from bind address (82 milliseconds)
[info] - RequestMessage serialization (11 milliseconds)
Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - StackOverflowError should be sent back and Dispatcher should survive (133 milliseconds)
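The two traces above are deliberate: the suite throws a StackOverflowError inside a message handler and asserts the dispatcher reports it back to the caller instead of dying. A sketch of that guard pattern (illustrative names, not Spark's internal Dispatcher API):

    import scala.util.control.NonFatal

    object SafeLoopSketch {
      def safelyCall(onError: Throwable => Unit)(body: => Unit): Unit = {
        try body catch {
          case NonFatal(e) => onError(e)   // ordinary failures
          case e: StackOverflowError =>
            // fatal for this message only: surface it, keep the loop alive
            try onError(e) catch { case NonFatal(_) => () }
          case t: Throwable => throw t     // other fatal errors still propagate
        }
      }
    }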
[info] JsonProtocolSuite:
[info] - SparkListenerEvent (432 milliseconds)
[info] - Dependent Classes (28 milliseconds)
[info] - ExceptionFailure backward compatibility: full stack trace (4 milliseconds)
[info] - StageInfo backward compatibility (details, accumulables) (3 milliseconds)
[info] - InputMetrics backward compatibility (2 milliseconds)
[info] - Input/Output records backwards compatibility (3 milliseconds)
[info] - Shuffle Read/Write records backwards compatibility (3 milliseconds)
[info] - OutputMetrics backward compatibility (2 milliseconds)
[info] - BlockManager events backward compatibility (2 milliseconds)
[info] - FetchFailed backwards compatibility (2 milliseconds)
[info] - ShuffleReadMetrics: Local bytes read backwards compatibility (3 milliseconds)
[info] - SparkListenerApplicationStart backwards compatibility (2 milliseconds)
[info] - ExecutorLostFailure backward compatibility (1 millisecond)
[info] - SparkListenerJobStart backward compatibility (6 milliseconds)
[info] - SparkListenerJobStart and SparkListenerJobEnd backward compatibility (7 milliseconds)
[info] - RDDInfo backward compatibility (scope, parent IDs, callsite) (3 milliseconds)
[info] - StageInfo backward compatibility (parent IDs) (2 milliseconds)
[info] - TaskCommitDenied backward compatibility (2 milliseconds)
[info] - AccumulableInfo backward compatibility (6 milliseconds)
[info] - mesos supports killing and limiting executors (2 seconds, 62 milliseconds)
[info] - ExceptionFailure backward compatibility: accumulator updates (32 milliseconds)
[info] - AccumulableInfo value de/serialization (15 milliseconds)
[info] - SPARK-31923: unexpected value type of internal accumulator (3 milliseconds)
[info] RPackageUtilsSuite:
[info] - mesos supports killing and relaunching tasks with executors (332 milliseconds)
[info] - pick which jars to unpack using the manifest (404 milliseconds)
[info] - mesos supports spark.executor.cores (290 milliseconds)
[info] - mesos supports unset spark.executor.cores (113 milliseconds)
[info] - mesos does not acquire more than spark.cores.max (108 milliseconds)
[info] - mesos does not acquire gpus if not specified (111 milliseconds)
[info] - mesos does not acquire more than spark.mesos.gpus.max (102 milliseconds)
[info] - mesos declines offers that violate attribute constraints (83 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max (98 milliseconds)
[info] - mesos declines offers with a filter when maxCores not a multiple of executor.cores (97 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max with executor.cores (116 milliseconds)
[info] - mesos assigns tasks round-robin on offers (96 milliseconds)
[info] - mesos creates multiple executors on a single slave (103 milliseconds)
[info] - mesos doesn't register twice with the same shuffle service (95 milliseconds)
[info] - Port offer decline when there is no appropriate range (92 milliseconds)
[info] - Port offer accepted when ephemeral ports are used (93 milliseconds)
[info] - Port offer accepted with user defined port numbers (94 milliseconds)
[info] - mesos kills an executor when told (87 milliseconds)
[info] - weburi is set in created scheduler driver (112 milliseconds)
[info] - failover timeout is set in created scheduler driver (127 milliseconds)
[info] - build an R package from a jar end to end (3 seconds, 108 milliseconds)
[info] - honors unset spark.mesos.containerizer (130 milliseconds)
[info] - honors spark.mesos.containerizer="mesos" (175 milliseconds)
[info] - jars that don't exist are skipped and print warning (334 milliseconds)
[info] - docker settings are reflected in created tasks (254 milliseconds)
[info] - force-pull-image option is disabled by default (112 milliseconds)
[info] - faulty R package shows documentation (584 milliseconds)
[info] - mesos supports spark.executor.uri (120 milliseconds)
[info] - jars without manifest return false (127 milliseconds)
[info] - SparkR zipping works properly (16 milliseconds)
[info] TopologyMapperSuite:
[info] - File based Topology Mapper (9 milliseconds)
[info] EventLoggingListenerSuite:
[info] - Verify log file exist (50 milliseconds)
[info] - mesos supports setting fetcher cache (210 milliseconds)
[info] - Basic event logging (104 milliseconds)
[info] - mesos supports disabling fetcher cache (87 milliseconds)
[info] - mesos sets task name to spark.app.name (107 milliseconds)
[info] - Basic event logging with compression (371 milliseconds)
[info] - mesos sets configurable labels on tasks (168 milliseconds)
[info] - mesos supports spark.mesos.network.name and spark.mesos.network.labels (139 milliseconds)
[info] - SPARK-28778 '--hostname' shouldn't be set for executor when virtual network is enabled (852 milliseconds)
[info] - supports spark.scheduler.minRegisteredResourcesRatio (152 milliseconds)
[info] - End-to-end event logging (4 seconds, 427 milliseconds)
[info] - supports data locality with dynamic allocation (6 seconds, 219 milliseconds)
[info] - Creates env-based reference secrets. (320 milliseconds)
[info] - Creates env-based value secrets. (198 milliseconds)
[info] - Creates file-based reference secrets. (172 milliseconds)
[info] - Creates file-based value secrets. (221 milliseconds)
[info] MesosClusterSchedulerSuite:
[info] - can queue drivers (45 milliseconds)
[info] - can kill queued drivers (36 milliseconds)
[info] - can handle multiple roles (73 milliseconds)
[info] - escapes commandline args for the shell (73 milliseconds)
[info] - supports spark.mesos.driverEnv.* (36 milliseconds)
[info] - supports spark.mesos.network.name and spark.mesos.network.labels (35 milliseconds)
[info] - supports setting fetcher cache on the dispatcher (34 milliseconds)
[info] - supports setting fetcher cache in the submission (33 milliseconds)
[info] - supports disabling fetcher cache (33 milliseconds)
[info] - accept/decline offers with driver constraints (48 milliseconds)
[info] - supports spark.mesos.driver.labels (80 milliseconds)
[info] - can kill supervised drivers (38 milliseconds)
[info] - SPARK-27347: do not restart outdated supervised drivers (1 second, 546 milliseconds)
[info] - Declines offer with refuse seconds = 120. (55 milliseconds)
[info] - Creates env-based reference secrets. (33 milliseconds)
[info] - Creates env-based value secrets. (34 milliseconds)
[info] - Creates file-based reference secrets. (36 milliseconds)
[info] - Creates file-based value secrets. (48 milliseconds)
[info] MesosClusterDispatcherSuite:
[info] - prints usage on empty input (48 milliseconds)
[info] - prints usage with only --help (3 milliseconds)
[info] - prints error with unrecognized options (2 milliseconds)
[info] MesosClusterManagerSuite:
[info] - mesos fine-grained (78 milliseconds)
[info] - mesos coarse-grained (79 milliseconds)
[info] - mesos with zookeeper (85 milliseconds)
[info] - mesos with i/o encryption throws error (829 milliseconds)
[info] MesosClusterDispatcherArgumentsSuite:
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.mesos.key2,value2)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test if spark config args are passed successfully (15 milliseconds)
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test non conf settings (3 milliseconds)
[info] MesosProtoUtilsSuite:
[info] - mesosLabels (3 milliseconds)
[info] ExecutorPodsSnapshotSuite:
[info] - States are interpreted correctly from pod metadata. (235 milliseconds)
[info] - Updates add new pods for non-matching ids and edit existing pods for matching ids (7 milliseconds)
[info] EnvSecretsFeatureStepSuite:
[info] - sets up all keyRefs (25 milliseconds)
[info] RDriverFeatureStepSuite:
[info] - R Step modifies container correctly (98 milliseconds)
[info] ExecutorPodsPollingSnapshotSourceSuite:
[info] - Items returned by the API should be pushed to the event queue (24 milliseconds)
[info] BasicExecutorFeatureStepSuite:
[info] - basic executor pod has reasonable defaults (54 milliseconds)
[info] - executor pod hostnames get truncated to 63 characters (5 milliseconds)
[info] - classpath and extra java options get translated into environment variables (6 milliseconds)
[info] - test executor pyspark memory (6 milliseconds)
[info] DriverKubernetesCredentialsFeatureStepSuite:
[info] - Don't set any credentials (14 milliseconds)
[info] - Only set credentials that are manually mounted. (5 milliseconds)
[info] - Mount credentials from the submission client as a secret. (90 milliseconds)
[info] ClientSuite:
[info] - The client should configure the pod using the builder. (24 milliseconds)
[info] - The client should create Kubernetes resources (5 milliseconds)
[info] - Waiting for app completion should stall on the watcher (6 milliseconds)
[info] DriverServiceFeatureStepSuite:
[info] - Headless service has a port for the driver RPC and the block manager. (21 milliseconds)
[info] - Hostname and ports are set according to the service name. (2 milliseconds)
[info] - Ports should resolve to defaults in SparkConf and in the service. (2 milliseconds)
[info] - Long prefixes should switch to using a generated name. (7 milliseconds)
[info] - Disallow bind address and driver host to be set explicitly. (1 millisecond)
[info] KubernetesDriverBuilderSuite:
[info] - Apply fundamental steps all the time. (7 milliseconds)
[info] - Apply secrets step if secrets are present. (6 milliseconds)
[info] - Apply Java step if main resource is none. (5 milliseconds)
[info] - Apply Python step if main resource is python. (5 milliseconds)
[info] - Apply volumes step if mounts are present. (10 milliseconds)
[info] - Apply R step if main resource is R. (4 milliseconds)
[info] ExecutorPodsAllocatorSuite:
[info] - Initially request executors in batches. Do not request another batch if the first has not finished. (29 milliseconds)
[info] - Request executors in batches. Allow another batch to be requested if all pending executors start running. (23 milliseconds)
[info] - When a current batch reaches error states immediately, re-request them on the next batch. (19 milliseconds)
[info] - When an executor is requested but the API does not report it in a reasonable time, retry requesting that executor. (7 milliseconds)
[info] KubernetesClusterSchedulerBackendSuite:
[info] - Start all components (7 milliseconds)
[info] - Stop all components (22 milliseconds)
[info] - Remove executor (3 milliseconds)
[info] - Kill executors (35 milliseconds)
[info] - Request total executors (3 milliseconds)
[info] KubernetesConfSuite:
[info] - Basic driver translated fields. (7 milliseconds)
[info] - Creating driver conf with and without the main app jar influences spark.jars (8 milliseconds)
[info] - Creating driver conf with a python primary file (9 milliseconds)
[info] - Creating driver conf with a r primary file (2 milliseconds)
[info] - Testing explicit setting of memory overhead on non-JVM tasks (3 milliseconds)
[info] - Resolve driver labels, annotations, secret mount paths, envs, and memory overhead (5 milliseconds)
[info] - Basic executor translated fields. (2 milliseconds)
[info] - Image pull secrets. (3 milliseconds)
[info] - Set executor labels, annotations, and secrets (5 milliseconds)
[info] KubernetesVolumeUtilsSuite:
[info] - Parses hostPath volumes correctly (6 milliseconds)
[info] - Parses persistentVolumeClaim volumes correctly (4 milliseconds)
[info] - Parses emptyDir volumes correctly (3 milliseconds)
[info] - Parses emptyDir volume options can be optional (2 milliseconds)
[info] - Defaults optional readOnly to false (2 milliseconds)
[info] - Gracefully fails on missing mount key (2 milliseconds)
[info] - Gracefully fails on missing option key (2 milliseconds)
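KubernetesVolumeUtilsSuite parses volume declarations out of dotted config keys; a sketch of the shape it consumes ("data" is an arbitrary volume name chosen for this example):

    import org.apache.spark.SparkConf

    object K8sVolumeSketch {
      val conf = new SparkConf()
        .set("spark.kubernetes.driver.volumes.hostPath.data.mount.path", "/data")
        .set("spark.kubernetes.driver.volumes.hostPath.data.mount.readOnly", "false") // optional, defaults to false
        .set("spark.kubernetes.driver.volumes.hostPath.data.options.path", "/mnt/data")
    }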
[info] BasicDriverFeatureStepSuite:
[info] - Check the pod respects all configurations from the user. (15 milliseconds)
[info] - Check appropriate entrypoint rerouting for various bindings (5 milliseconds)
[info] - Additional system properties resolve jars and set cluster-mode confs. (4 milliseconds)
[info] ExecutorPodsSnapshotsStoreSuite:
[info] - Subscribers get notified of events periodically. (8 milliseconds)
[info] - Even without sending events, initially receive an empty buffer. (2 milliseconds)
[info] - Replacing the snapshot passes the new snapshot to subscribers. (5 milliseconds)
[info] MountVolumesFeatureStepSuite:
[info] - Mounts hostPath volumes (8 milliseconds)
[info] - Mounts persistentVolumeClaims (4 milliseconds)
[info] - Mounts emptyDir (6 milliseconds)
[info] - Mounts emptyDir with no options (3 milliseconds)
[info] - Mounts multiple volumes (5 milliseconds)
[info] MountSecretsFeatureStepSuite:
[info] - mounts all given secrets (7 milliseconds)
[info] ExecutorPodsLifecycleManagerSuite:
[info] - When an executor reaches error states immediately, remove from the scheduler backend. (53 milliseconds)
[info] - Don't remove executors twice from Spark but remove from K8s repeatedly. (5 milliseconds)
[info] - When the scheduler backend lists executor ids that aren't present in the cluster, remove those executors from Spark. (4 milliseconds)
[info] JavaDriverFeatureStepSuite:
[info] - Java Step modifies container correctly (3 milliseconds)
[info] ExecutorPodsWatchSnapshotSourceSuite:
[info] - Watch events should be pushed to the snapshots store as snapshot updates. (4 milliseconds)
[info] LocalDirsFeatureStepSuite:
[info] - Resolve to default local dir if neither env nor configuration is set (78 milliseconds)
[info] - Use configured local dirs split on comma if provided. (3 milliseconds)
[info] PythonDriverFeatureStepSuite:
[info] - Python Step modifies container correctly (11 milliseconds)
[info] - Python Step testing empty pyfiles (2 milliseconds)
[info] KubernetesExecutorBuilderSuite:
[info] - Basic steps are consistently applied. (4 milliseconds)
[info] - Apply secrets step if secrets are present. (2 milliseconds)
[info] - Apply volumes step if mounts are present. (2 milliseconds)
[info] FailureTrackerSuite:
[info] - failures expire if validity interval is set (318 milliseconds)
[info] - failures never expire if validity interval is not set (-1) (4 milliseconds)
[info] ClientSuite:
[info] - default Yarn application classpath (106 milliseconds)
[info] - default MR application classpath (1 millisecond)
[info] - resultant classpath for an application that defines a classpath for YARN (407 milliseconds)
[info] - resultant classpath for an application that defines a classpath for MR (24 milliseconds)
[info] - resultant classpath for an application that defines both classpaths, YARN and MR (35 milliseconds)
[info] - recover from repeated node failures during shuffle-reduce (27 seconds, 132 milliseconds)
[info] - Local jar URIs (520 milliseconds)
[info] - Jar path propagation through SparkConf (650 milliseconds)
[info] - Cluster path translation (34 milliseconds)
[info] - configuration and args propagate through createApplicationSubmissionContext (126 milliseconds)
[info] - spark.yarn.jars with multiple paths and globs (316 milliseconds)
[info] - distribute jars archive (276 milliseconds)
[info] - distribute archive multiple times (869 milliseconds)
[info] - End-to-end event logging with compression (17 seconds, 863 milliseconds)
[info] - distribute local spark jars (222 milliseconds)
[info] - Event logging with password redaction (118 milliseconds)
[info] - ignore same name jars (137 milliseconds)
[info] - SPARK-31582 Being able to not populate Hadoop classpath (69 milliseconds)
[info] - files URI match test1 (2 milliseconds)
[info] - files URI match test2 (1 millisecond)
[info] - files URI match test3 (1 millisecond)
[info] - wasb URI match test (1 millisecond)
[info] - hdfs URI match test (0 milliseconds)
[info] - Log overwriting (116 milliseconds)
[info] - files URI unmatch test1 (2 milliseconds)
[info] - files URI unmatch test2 (1 millisecond)
[info] - Event log name (0 milliseconds)
[info] - files URI unmatch test3 (0 milliseconds)
[info] - wasb URI unmatch test1 (3 milliseconds)
[info] - wasb URI unmatch test2 (1 millisecond)
[info] - s3 URI unmatch test (1 millisecond)
[info] - hdfs URI unmatch test1 (1 millisecond)
[info] - hdfs URI unmatch test2 (1 millisecond)
[info] FileCommitProtocolInstantiationSuite:
[info] - Dynamic partitions require appropriate constructor (2 milliseconds)
[info] - Standard partitions work with classic constructor (1 millisecond)
[info] - Three arg constructors have priority (1 millisecond)
[info] - Three arg constructors have priority when dynamic (1 millisecond)
[info] - The protocol must be of the correct class (1 millisecond)
[info] - If there is no matching constructor, class hierarchy is irrelevant (1 millisecond)
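The commit-protocol suite checks constructor selection by reflection: a three-arg (jobId, path, dynamicPartitionOverwrite) constructor has priority, with fallback to the classic two-arg form, and failure if dynamic overwrite was requested but only the two-arg form exists. A hedged sketch of that pattern:

    object CommitProtocolReflectSketch {
      def instantiate[T](clazz: Class[T], jobId: String, path: String,
                         dynamic: Boolean): T = {
        try {
          // three-arg constructors have priority, dynamic or not
          clazz.getConstructor(classOf[String], classOf[String], classOf[Boolean])
            .newInstance(jobId, path, java.lang.Boolean.valueOf(dynamic))
        } catch {
          case _: NoSuchMethodException =>
            require(!dynamic, "dynamic partition overwrite needs the 3-arg constructor")
            clazz.getConstructor(classOf[String], classOf[String])
              .newInstance(jobId, path)
        }
      }
    }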
[info] YarnAllocatorSuite:
[info] JobCancellationSuite:
[info] - local mode, FIFO scheduler (270 milliseconds)
[info] - single container allocated (237 milliseconds)
[info] - container should not be created if requested number is met (84 milliseconds)
[info] - some containers allocated (60 milliseconds)
[info] - local mode, fair scheduler (248 milliseconds)
[info] - receive more containers than requested (65 milliseconds)
[info] - decrease total requested executors (53 milliseconds)
[info] - decrease total requested executors to less than currently running (41 milliseconds)
[info] - kill executors (58 milliseconds)
[info] - kill same executor multiple times (51 milliseconds)
[info] - process same completed container multiple times (52 milliseconds)
[info] - lost executor removed from backend (51 milliseconds)
[info] - blacklisted nodes reflected in amClient requests (77 milliseconds)
[info] - memory exceeded diagnostic regexes (1 millisecond)
[info] - window based failure executor counting (44 milliseconds)
[info] - SPARK-26269: YarnAllocator should have same blacklist behaviour with YARN (104 milliseconds)
[info] ClientDistributedCacheManagerSuite:
[info] - test getFileStatus empty (37 milliseconds)
[info] - test getFileStatus cached (2 milliseconds)
[info] - test addResource (5 milliseconds)
[info] - test addResource link null (3 milliseconds)
[info] - test addResource appmaster only (3 milliseconds)
[info] - test addResource archive (4 milliseconds)
[info] ExtensionServiceIntegrationSuite:
[info] - Instantiate (7 milliseconds)
[info] - Contains SimpleExtensionService Service (3 milliseconds)
[info] YarnAllocatorBlacklistTrackerSuite:
[info] - expiring its own blacklisted nodes (2 milliseconds)
[info] - not handling the expiry of scheduler blacklisted nodes (2 milliseconds)
[info] - combining scheduler and allocation blacklist (4 milliseconds)
[info] - blacklist all available nodes (3 milliseconds)
[info] YarnClusterSuite:
[info] - cluster mode, FIFO scheduler (4 seconds, 415 milliseconds)
[info] - recover from node failures with replication (12 seconds, 115 milliseconds)
[info] - cluster mode, fair scheduler (5 seconds, 150 milliseconds)
[info] - do not put partially executed partitions into cache (107 milliseconds)
[info] - job group (101 milliseconds)
[info] - inherited job group (SPARK-6629) (89 milliseconds)
[info] - job group with interruption (158 milliseconds)
[info] - compacted topic (2 minutes, 5 seconds)
[info] - iterator boundary conditions (578 milliseconds)
[info] - executor sorting (14 milliseconds)
[info] - unpersist RDDs (5 seconds, 473 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - reference partitions inside a task (3 seconds, 159 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (3 seconds, 587 milliseconds)
[info] ReceiverTrackerSuite:
[info] - pattern based subscription (1 second, 998 milliseconds)
[info] - receiving from largest starting offset (618 milliseconds)
[info] - send rate update to receivers (2 seconds, 894 milliseconds)
[info] - creating stream by offset (553 milliseconds)
[info] - should restart receiver after stopping it (965 milliseconds)
[info] - SPARK-11063: TaskSetManager should use Receiver RDD's preferredLocations (564 milliseconds)
[info] - get allocated executors (801 milliseconds)
[info] RateLimitedOutputStreamSuite:
[info] - offset recovery (4 seconds, 366 milliseconds)
[info] - offset recovery from kafka (1 second, 70 milliseconds)
[info] - Direct Kafka stream report input information (696 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (167 milliseconds)
[info] - write (4 seconds, 178 milliseconds)
[info] RecurringTimerSuite:
[info] - basic (11 milliseconds)
[info] - SPARK-10224: call 'callback' after stopping (24 milliseconds)
[info] InputStreamsSuite:
[info] - maxMessagesPerPartition with no lag (107 milliseconds)
[info] - maxMessagesPerPartition respects max rate (114 milliseconds)
Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
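The InterruptedException above is benign teardown noise: the receiver supervisor schedules a restart on a helper thread that sleeps for the restart delay, and stopping the receiver interrupts that sleep. A sketch of the pattern (illustrative, not the ReceiverSupervisor API itself):

    import java.util.concurrent.Executors
    import scala.concurrent.{ExecutionContext, Future}

    object RestartSketch {
      private val pool = Executors.newSingleThreadExecutor()
      implicit private val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

      def restartAfter(delayMs: Long)(restart: () => Unit): Future[Unit] = Future {
        Thread.sleep(delayMs)  // interrupted if the receiver is stopped meanwhile
        restart()
      }
    }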
[info] - socket input stream (949 milliseconds)
[info] - socket input stream - no block in a batch (565 milliseconds)
[info] - task reaper kills JVM if killed tasks keep running for too long (19 seconds, 525 milliseconds)
[info] - using rate controller (2 seconds, 665 milliseconds)
[info] - run Spark in yarn-client mode *** FAILED *** (26 seconds, 218 milliseconds)
[info]   FAILED did not equal FINISHED (stdout/stderr was not captured) (BaseYarnClusterSuite.scala:201)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.deploy.yarn.BaseYarnClusterSuite.checkResult(BaseYarnClusterSuite.scala:201)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite.org$apache$spark$deploy$yarn$YarnClusterSuite$$testBasicYarnApp(YarnClusterSuite.scala:242)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$1.apply$mcV$sp(YarnClusterSuite.scala:84)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$1.apply(YarnClusterSuite.scala:84)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$1.apply(YarnClusterSuite.scala:84)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - backpressure.initialRate should honor maxRatePerPartition (1 second, 18 milliseconds)
[info] - use backpressure.initialRate with backpressure (912 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to the specified minimum with default 1 (66 milliseconds)
[info] KafkaDataConsumerSuite:
[info] - KafkaDataConsumer reuse in case of same groupId and TopicPartition (6 milliseconds)
[info] - binary records stream (6 seconds, 374 milliseconds)
[info] - file input stream - newFilesOnly = true (799 milliseconds)
[info] - concurrent use of KafkaDataConsumer (1 second, 954 milliseconds)
[info] - file input stream - newFilesOnly = false (874 milliseconds)
[info] - file input stream - wildcard (960 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaLocationStrategySuite.testLocationStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.008s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite.testKafkaRDD started
[info] - multi-thread receiver (1 second, 994 milliseconds)
[info] - queue input stream - oneAtATime = true (1 second, 95 milliseconds)
[info] - task reaper will not kill JVM if spark.task.killTimeout == -1 (13 seconds, 597 milliseconds)
[info] - two jobs sharing the same stage (137 milliseconds)
[info] - queue input stream - oneAtATime = false (2 seconds, 110 milliseconds)
[info] - interruptible iterator of shuffle reader (239 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 3.486s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaConsumerStrategySuite.testConsumerStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - test track the number of input stream (131 milliseconds)
[info] TaskContextSuite:
[info] WriteAheadLogUtilsSuite:
[info] - log selection and creation (34 milliseconds)
[info] - wrap WriteAheadLog in BatchedWriteAheadLog when batching is enabled (5 milliseconds)
[info] - batching is enabled by default in WriteAheadLog (1 millisecond)
[info] - closeFileAfterWrite is disabled by default in WriteAheadLog (0 milliseconds)
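WriteAheadLogUtilsSuite pins down the defaults seen above; at the user level the receiver write-ahead log is a single switch, and Spark wraps the log in the batched implementation automatically:

    import org.apache.spark.SparkConf

    object WalSketch {
      val conf = new SparkConf()
        .set("spark.streaming.receiver.writeAheadLog.enable", "true")  // off by default
    }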
[info] ReceiverSchedulingPolicySuite:
[info] - rescheduleReceiver: empty executors (1 millisecond)
[info] - rescheduleReceiver: receiver preferredLocation (1 millisecond)
[info] - rescheduleReceiver: return all idle executors if there are any idle executors (6 milliseconds)
[info] - rescheduleReceiver: return all executors that have minimum weight if no idle executors (3 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more receivers than executors (4 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more executors than receivers (4 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when the preferredLocations are even (7 milliseconds)
[info] - scheduleReceivers: return empty if no receiver (0 milliseconds)
[info] - scheduleReceivers: return empty scheduled executors if no executors (2 milliseconds)
[info] - provide metrics sources (109 milliseconds)
[info] PIDRateEstimatorSuite:
[info] - the right estimator is created (14 milliseconds)
[info] - estimator checks ranges (2 milliseconds)
[info] - first estimate is None (2 milliseconds)
[info] - second estimate is not None (1 millisecond)
[info] - no estimate when no time difference between successive calls (1 millisecond)
[info] - no estimate when no records in previous batch (1 millisecond)
[info] - no estimate when there is no processing delay (1 millisecond)
[info] - calls TaskCompletionListener after failure (71 milliseconds)
[info] - estimate is never less than min rate (22 milliseconds)
[info] - with no accumulated or positive error, |I| > 0, follow the processing speed (5 milliseconds)
[info] - with no accumulated but some positive error, |I| > 0, follow the processing speed (7 milliseconds)
[info] - with some accumulated and some positive error, |I| > 0, stay below the processing speed (17 milliseconds)
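The PIDRateEstimator tests characterize the streaming backpressure controller: roughly, the new ingestion rate is the previous one corrected by proportional, integral, and derivative error terms and floored at a minimum rate. An illustrative update rule (a sketch, not Spark's exact implementation):

    object PidSketch {
      def pidRate(latestRate: Double, schedulingDelayMs: Long, processingDelayMs: Long,
                  elements: Long, batchIntervalMs: Long,
                  kp: Double = 1.0, ki: Double = 0.2, kd: Double = 0.0,
                  minRate: Double = 100.0): Double = {
        val processingRate = elements.toDouble / processingDelayMs * 1000
        val error = latestRate - processingRate
        // queued work shows up as scheduling delay; fold it in as integral error
        val historicalError = schedulingDelayMs.toDouble * processingRate / batchIntervalMs
        val dError = 0.0  // omitted: needs the previous error and elapsed time
        math.max(latestRate - kp * error - ki * historicalError - kd * dError, minRate)
      }
    }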
[info] - calls TaskFailureListeners after failure (63 milliseconds)
[info] ReceivedBlockHandlerSuite:
[info] - all TaskCompletionListeners should be called even if some fail (7 milliseconds)
[info] - all TaskFailureListeners should be called even if some fail (9 milliseconds)
[info] - TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014) (86 milliseconds)
[info] - TaskContext.stageAttemptNumber getter (621 milliseconds)
[info] - BlockManagerBasedBlockHandler - store blocks (745 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (26 milliseconds)
[info] - accumulators are updated on exception failures (237 milliseconds)
[info] - failed tasks collect only accumulators whose values count during failures (63 milliseconds)
[info] - only updated internal accumulators will be sent back to driver (54 milliseconds)
[info] - localProperties are propagated to executors correctly (93 milliseconds)
[info] - immediately call a completion listener if the context is completed (1 millisecond)
[info] - immediately call a failure listener if the context has failed (1 millisecond)
[info] - TaskCompletionListenerException.getMessage should include previousError (1 millisecond)
[info] - all TaskCompletionListeners should be called even if some fail or a task (4 milliseconds)
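The TaskContextSuite results above cover the listener contract: completion listeners run even when the task fails, all registered listeners run even if one of them throws, and attemptNumber is the per-task attempt rather than the task id. Registering a listener from user code:

    import org.apache.spark.{SparkConf, SparkContext, TaskContext}
    import org.apache.spark.util.TaskCompletionListener

    object ListenerSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local[2]").setAppName("listener-sketch"))
        try {
          sc.parallelize(1 to 4, 2).foreachPartition { _ =>
            val ctx = TaskContext.get()
            ctx.addTaskCompletionListener(new TaskCompletionListener {
              override def onTaskCompletion(context: TaskContext): Unit =
                println(s"partition ${context.partitionId()} done, " +
                  s"attempt ${context.attemptNumber()}")
            })
          }
        } finally sc.stop()
      }
    }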
[info] DAGSchedulerSuite:
[info] - [SPARK-3353] parent stage should have lower stage id (80 milliseconds)
[info] - [SPARK-13902] Ensure no duplicate stages are created (25 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - store blocks (898 milliseconds)
[info] - All shuffle files on the slave should be cleaned up when slave lost (110 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (60 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (183 milliseconds)
[info] - zero split job (19 milliseconds)
[info] - run trivial job (8 milliseconds)
[info] - Test Block - count messages (307 milliseconds)
[info] - Test Block - isFullyConsumed (50 milliseconds)
[info] - run trivial job w/ dependency (9 milliseconds)
[info] - equals and hashCode AccumulableInfo (1 millisecond)
[info] - cache location preferences w/ dependency (13 milliseconds)
[info] ReceivedBlockHandlerWithEncryptionSuite:
[info] - regression test for getCacheLocs (3 milliseconds)
[info] - getMissingParentStages should consider all ancestor RDDs' cache statuses (9 milliseconds)
[info] - avoid exponential blowup when getting preferred locs list (69 milliseconds)
[info] - BlockManagerBasedBlockHandler - store blocks (555 milliseconds)
[info] - unserializable task (8 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (16 milliseconds)
[info] - trivial job failure (6 milliseconds)
[info] - trivial job cancellation (6 milliseconds)
[info] - job cancellation no-kill backend (18 milliseconds)
[info] - run trivial shuffle (11 milliseconds)
[info] - run trivial shuffle with fetch failure (23 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - store blocks (809 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (28 milliseconds)
[info] - shuffle files not lost when slave lost with shuffle service (128 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (129 milliseconds)
[info] - shuffle files lost when worker lost with shuffle service (108 milliseconds)
[info] - Test Block - count messages (206 milliseconds)
[info] - Test Block - isFullyConsumed (32 milliseconds)
[info] InputInfoTrackerSuite:
[info] - shuffle files lost when worker lost without shuffle service (124 milliseconds)
[info] - test report and get InputInfo from InputInfoTracker (1 millisecond)
[info] - test cleanup InputInfo from InputInfoTracker (0 milliseconds)
[info] JobGeneratorSuite:
[info] - shuffle files not lost when executor failure with shuffle service (103 milliseconds)
[info] - shuffle files lost when executor failure without shuffle service (127 milliseconds)
[info] - Single stage fetch failure should not abort the stage. (39 milliseconds)
[info] - Multiple consecutive stage fetch failures should lead to job being aborted. (60 milliseconds)
[info] - Failures in different stages should not trigger an overall abort (96 milliseconds)
[info] - Non-consecutive stage failures don't trigger abort (90 milliseconds)
[info] - trivial shuffle with multiple fetch failures (12 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure (43 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled (30 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 7.032s
[info] - Fail the job if a barrier ResultTask failed (14 milliseconds)
[info] - late fetch failures don't cause multiple concurrent attempts for the same map stage (30 milliseconds)
[info] - extremely late fetch failures don't cause multiple concurrent attempts for the same stage (39 milliseconds)
[info] - task events always posted in speculation / when stage is killed (29 milliseconds)
[info] - ignore late map task completions (17 milliseconds)
[info] - run shuffle with map stage failure (7 milliseconds)
[info] - SPARK-6222: Do not clear received block data too soon (2 seconds, 523 milliseconds)
[info] ReceivedBlockTrackerSuite:
[info] - block addition, and block to batch allocation (7 milliseconds)
[info] - shuffle fetch failure in a reused shuffle dependency (21 milliseconds)
[info] - don't submit stage until its dependencies map outputs are registered (SPARK-5259) (28 milliseconds)
[info] - register map outputs correctly after ExecutorLost and task Resubmitted (16 milliseconds)
[info] - failure of stage used by two jobs (54 milliseconds)
[info] SparkAWSCredentialsBuilderSuite:
[info] - should build DefaultCredentials when given no params (32 milliseconds)
[info] - should build BasicCredentials (16 milliseconds)
[info] - should build STSCredentials (2 milliseconds)
[info] - SparkAWSCredentials classes should be serializable (7 milliseconds)
[info] - stage used by two jobs, the first no longer active (SPARK-6880) (16 milliseconds)
[info] KinesisCheckpointerSuite:
[info] - stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) (26 milliseconds)
[info] - checkpoint is not called twice for the same sequence number (41 milliseconds)
[info] - checkpoint is called after sequence number increases (2 milliseconds)
[info] - should checkpoint if we have exceeded the checkpoint interval (17 milliseconds)
[info] - shouldn't checkpoint if we have not exceeded the checkpoint interval (1 millisecond)
[info] - run trivial shuffle with out-of-band executor failure and retry (19 milliseconds)
[info] - should not checkpoint for the same sequence number (3 milliseconds)
[info] - removing checkpointer checkpoints one last time (2 milliseconds)
[info] - if checkpointing is going on, wait until finished before removing and checkpointing (102 milliseconds)
[info] - recursive shuffle failures (28 milliseconds)
[info] - cached post-shuffle (29 milliseconds)
[info] - misbehaved accumulator should not crash DAGScheduler and SparkContext (31 milliseconds)
[info] - misbehaved accumulator should not impact other accumulators (21 milliseconds)
[info] - misbehaved resultHandler should not crash DAGScheduler and SparkContext (36 milliseconds)
[info] - getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) (19 milliseconds)
[info] - getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) (59 milliseconds)
[info] - accumulator not calculated for resubmitted result stage (7 milliseconds)
[info] - accumulator not calculated for resubmitted task in result stage (5 milliseconds)
[info] - accumulators are updated on exception failures and task killed (6 milliseconds)
[info] - reduce tasks should be placed locally with map output (13 milliseconds)
[info] - reduce task locality preferences should only include machines with largest map outputs (13 milliseconds)
[info] KinesisInputDStreamBuilderSuite:
[info] - should raise an exception if the StreamingContext is missing (4 milliseconds)
[info] - should raise an exception if the stream name is missing (5 milliseconds)
[info] - should raise an exception if the checkpoint app name is missing (2 milliseconds)
[info] - stages with both narrow and shuffle dependencies use narrow ones for locality (11 milliseconds)
[info] - Spark exceptions should include call site in stack trace (21 milliseconds)
[info] - should propagate required values to KinesisInputDStream (342 milliseconds)
[info] - should propagate default values to KinesisInputDStream (11 milliseconds)
[info] - should propagate custom non-auth values to KinesisInputDStream (28 milliseconds)
[info] - old Api should throw UnsupportedOperationException with AT_TIMESTAMP (4 milliseconds)
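The KinesisInputDStreamBuilderSuite cases above (missing StreamingContext / stream name / checkpoint app name, propagation of required and default values) exercise a builder along these lines. A sketch assuming the public Spark 2.4 builder API; the stream name, endpoint, region, and app name are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)                 // required: missing -> exception
        .streamName("my-kinesis-stream")       // required
        .checkpointAppName("my-app")           // required
        .endpointUrl("https://kinesis.us-east-1.amazonaws.com")
        .regionName("us-east-1")
        .initialPosition(new KinesisInitialPositions.Latest())
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()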
[info] - catch errors in event loop (18 milliseconds)
[info] KinesisReceiverSuite:
[info] - process records including store and set checkpointer (7 milliseconds)
[info] - split into multiple processes if a limitation is set (3 milliseconds)
[info] - shouldn't store and update checkpointer when receiver is stopped (4 milliseconds)
[info] - shouldn't update checkpointer when exception occurs during store (13 milliseconds)
[info] - simple map stage submission (26 milliseconds)
[info] - shutdown should checkpoint if the reason is TERMINATE (10 milliseconds)
[info] - shutdown should not checkpoint if the reason is something other than TERMINATE (2 milliseconds)
[info] - retry success on first attempt (3 milliseconds)
[info] - retry success on second attempt after a Kinesis throttling exception (36 milliseconds)
[info] - retry success on second attempt after a Kinesis dependency exception (38 milliseconds)
[info] - retry failed after a shutdown exception (5 milliseconds)
[info] - retry failed after an invalid state exception (5 milliseconds)
[info] - retry failed after unexpected exception (5 milliseconds)
[info] - map stage submission with reduce stage also depending on the data (20 milliseconds)
[info] - retry failed after exhausting all retries (62 milliseconds)
[info] WithAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - map stage submission with fetch failure (29 milliseconds)
[info] WithAggregationKinesisStreamSuite:
[info] - map stage submission with multiple shared stages and failures (34 milliseconds)
[info] - KinesisUtils API (39 milliseconds)
[info] - RDD generation (35 milliseconds)
[info] - Trigger mapstage's job listener in submitMissingTasks (27 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (10 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisStreamSuite:
[info] - map stage submission with executor failure late map task completions (17 milliseconds)
[info] - KinesisUtils API (3 milliseconds)
[info] - RDD generation (7 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (6 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - getShuffleDependencies correctly returns only direct shuffle parents (2 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsStsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandler started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testAwsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testKinesisStream started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.891s
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilderOldApi started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilder started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.245s
[info] ScalaTest
[info] Run completed in 6 minutes, 5 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 104, Failed 0, Errors 0, Passed 103, Skipped 1
[info] ScalaTest
[info] Run completed in 6 minutes, 6 seconds.
[info] Total number of tests run: 5
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 5, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 5, Failed 0, Errors 0, Passed 5
[info] ScalaTest
[info] Run completed in 6 minutes, 5 seconds.
[info] Total number of tests run: 85
[info] Suites: completed 8, aborted 0
[info] Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 85, Failed 0, Errors 0, Passed 85
[info] ScalaTest
[info] Run completed in 6 minutes, 5 seconds.
[info] Total number of tests run: 19
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 19, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[info] Passed: Total 81, Failed 0, Errors 0, Passed 81, Ignored 1
[info] ScalaTest
[info] Run completed in 6 minutes, 0 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures (1 second, 464 milliseconds)
[info] - [SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts (36 milliseconds)
[info] - task end event should have updated accumulators (SPARK-20342) (257 milliseconds)
[info] - Barrier task failures from the same stage attempt don't trigger multiple stage retries (14 milliseconds)
[info] - Barrier task failures from a previous stage attempt don't trigger stage retry (15 milliseconds)
[info] - SPARK-23207: retry all the succeeding stages when the map stage is indeterminate (17 milliseconds)
[info] - SPARK-29042: Sampled RDD with unordered input should be indeterminate (8 milliseconds)
[info] ExecutorClassLoaderSuite:
[info] - SPARK-23207: cannot rollback a result stage (11 milliseconds)
[info] - SPARK-23207: local checkpoint fail to rollback (checkpointed before) (43 milliseconds)
[info] - SPARK-23207: local checkpoint fail to rollback (checkpointing now) (11 milliseconds)
[info] - SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) (127 milliseconds)
[info] - SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) (36 milliseconds)
[info] - SPARK-28699: abort stage if parent stage is indeterminate stage (13 milliseconds)
[info] PrefixComparatorsSuite:
[info] - String prefix comparator (199 milliseconds)
[info] - Binary prefix comparator (28 milliseconds)
[info] - double prefix comparator handles NaNs properly (1 millisecond)
[info] - double prefix comparator handles negative NaNs properly (0 milliseconds)
[info] - double prefix comparator handles other special values properly (0 milliseconds)
[info] MasterWebUISuite:
[info] - kill application (251 milliseconds)
[info] - kill driver (114 milliseconds)
[info] SorterSuite:
[info] - equivalent to Arrays.sort (88 milliseconds)
[info] - KVArraySorter (136 milliseconds)
[info] - child over system classloader (1 second, 400 milliseconds)
[info] - child first (90 milliseconds)
[info] - parent first (74 milliseconds)
[info] - child first can fall back (65 milliseconds)
[info] - child first can fail (55 milliseconds)
[info] - resource from parent (62 milliseconds)
[info] - resources from parent (55 milliseconds)
[info] - fetch classes using Spark's RpcEnv (335 milliseconds)
[info] ReplSuite:
[info] - block addition, and block to batch allocation with many blocks (15 seconds, 996 milliseconds)
[info] - recovery with write ahead logs should remove only allocated blocks from received queue (36 milliseconds)
[info] - block allocation to batch should not lose blocks from received queue (262 milliseconds)
[info] - recovery and cleanup with write ahead logs (102 milliseconds)
[info] - disable write ahead log when checkpoint directory is not set (1 millisecond)
[info] - parallel file deletion in FileBasedWriteAheadLog is robust to deletion error (49 milliseconds)
[info] WindowOperationsSuite:
[info] - window - basic window (1 second, 771 milliseconds)
[info] - window - tumbling window (763 milliseconds)
[info] - window - larger window (1 second, 56 milliseconds)
[info] - window - non-overlapping window (507 milliseconds)
[info] - window - persistence level (99 milliseconds)
[info] - run Spark in yarn-cluster mode *** FAILED *** (41 seconds, 62 milliseconds)
[info]   FAILED did not equal FINISHED (stdout/stderr was not captured) (BaseYarnClusterSuite.scala:201)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.deploy.yarn.BaseYarnClusterSuite.checkResult(BaseYarnClusterSuite.scala:201)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite.org$apache$spark$deploy$yarn$YarnClusterSuite$$testBasicYarnApp(YarnClusterSuite.scala:242)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply$mcV$sp(YarnClusterSuite.scala:88)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply(YarnClusterSuite.scala:88)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply(YarnClusterSuite.scala:88)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
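The assertion that fails at BaseYarnClusterSuite.scala:201 compares the launched application's terminal state against FINISHED. A rough reconstruction of that check, hedged (plain asserts instead of ScalaTest matchers; the "success" marker convention is inferred from the suite's result-file handling):

    import java.io.File
    import java.nio.charset.StandardCharsets
    import com.google.common.io.Files
    import org.apache.spark.launcher.SparkAppHandle

    def checkResult(finalState: SparkAppHandle.State, result: File): Unit = {
      // Here the app ended as FAILED, so this is the assertion that trips.
      assert(finalState == SparkAppHandle.State.FINISHED, s"$finalState did not equal FINISHED")
      val resultString = Files.toString(result, StandardCharsets.UTF_8)
      assert(resultString == "success")
    }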
[info] - reduceByKeyAndWindow - basic reduction (909 milliseconds)
[info] - propagation of local properties (10 seconds, 285 milliseconds)
[info] - reduceByKeyAndWindow - key already in window and new value added into window (435 milliseconds)
[info] - reduceByKeyAndWindow - new key added into window (408 milliseconds)
[info] - reduceByKeyAndWindow - key removed from window (642 milliseconds)
[info] - reduceByKeyAndWindow - larger slide time (676 milliseconds)
[info] - reduceByKeyAndWindow - big test (1 second, 196 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - basic reduction (904 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - key already in window and new value added into window (803 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476044203).
Spark session available as 'spark'.
[info] - reduceByKeyAndWindow with inverse function - new key added into window (542 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - key removed from window (641 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - larger slide time (802 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - big test (1 second, 257 milliseconds)
[info] - reduceByKeyAndWindow with inverse and filter functions - big test (1 second, 88 milliseconds)
[info] - groupByKeyAndWindow (772 milliseconds)
[info] - countByWindow (1 second, 320 milliseconds)
[info] - countByValueAndWindow (471 milliseconds)
[info] StreamingListenerSuite:
[info] - batch info reporting (713 milliseconds)
[info] - receiver info reporting (376 milliseconds)
[info] - output operation reporting (617 milliseconds)
[info] - don't call ssc.stop in listener (626 milliseconds)
[info] - onBatchCompleted with successful batch (1 second, 410 milliseconds)
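The StreamingListenerSuite entries above drive callbacks on the public listener API. A minimal sketch of the onBatchCompleted hook these tests exercise (`ssc` is a live StreamingContext, as in the spark-shell sessions interleaved in this log):

    import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

    class BatchLogger extends StreamingListener {
      override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
        val info = batch.batchInfo
        println(s"batch ${info.batchTime} done, total delay = ${info.totalDelay.getOrElse(-1L)} ms")
      }
    }
    // ssc.addStreamingListener(new BatchLogger)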
[info] - SPARK-5984 TimSort bug (28 seconds, 263 milliseconds)
[info] - Sorter benchmark for key-value pairs !!! IGNORED !!!
[info] - Sorter benchmark for primitive int array !!! IGNORED !!!
[info] RandomSamplerSuite:
[info] - utilities (27 milliseconds)
[info] - sanity check medianKSD against references (552 milliseconds)
[info] - bernoulli sampling (283 milliseconds)
[info] - bernoulli sampling without iterator (145 milliseconds)
[info] - bernoulli sampling with gap sampling optimization (283 milliseconds)
[info] - onBatchCompleted with failed batch and one failed job (1 second, 851 milliseconds)
[info] - SPARK-15236: use Hive catalog (17 seconds, 563 milliseconds)
[info] - bernoulli sampling (without iterator) with gap sampling optimization (478 milliseconds)
[info] - bernoulli boundary cases (13 milliseconds)
[info] - bernoulli (without iterator) boundary cases (20 milliseconds)
[info] - bernoulli data types (209 milliseconds)
[info] - bernoulli clone (41 milliseconds)
[info] - bernoulli set seed (68 milliseconds)
[info] - replacement sampling (88 milliseconds)
[info] - onBatchCompleted with failed batch and multiple failed jobs (711 milliseconds)
[info] - replacement sampling without iterator (148 milliseconds)
[info] - replacement sampling with gap sampling (215 milliseconds)
[info] - replacement sampling (without iterator) with gap sampling (260 milliseconds)
[info] - replacement boundary cases (1 millisecond)
[info] - replacement (without) boundary cases (2 milliseconds)
[info] - replacement data types (163 milliseconds)
[info] - StreamingListener receives no events after stopping StreamingListenerBus (798 milliseconds)
[info] ReceiverInputDStreamSuite:
[info] - replacement clone (60 milliseconds)
[info] - replacement set seed (92 milliseconds)
[info] - bernoulli partitioning sampling (40 milliseconds)
[info] - bernoulli partitioning sampling without iterator (38 milliseconds)
[info] - bernoulli partitioning boundary cases (2 milliseconds)
[info] - bernoulli partitioning (without iterator) boundary cases (5 milliseconds)
[info] - bernoulli partitioning data (1 millisecond)
[info] - bernoulli partitioning clone (0 milliseconds)
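The RandomSamplerSuite lines cover the Bernoulli and replacement sampling paths behind RDD.sample. A REPL-style illustration (`sc` is the SparkContext the shell sessions in this log expose):

    val data = sc.parallelize(1 to 10000)
    // Bernoulli: each element kept independently with probability `fraction`.
    val bernoulli   = data.sample(withReplacement = false, fraction = 0.1, seed = 42L)
    // Replacement: per-element draw with replacement, so duplicates are possible.
    val replacement = data.sample(withReplacement = true,  fraction = 0.1, seed = 42L)
    println(s"bernoulli=${bernoulli.count()} replacement=${replacement.count()}")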
[info] PoolSuite:
[info] - Without WAL enabled: createBlockRDD creates empty BlockRDD when no block info (223 milliseconds)
[info] - FIFO Scheduler Test (242 milliseconds)
[info] - Without WAL enabled: createBlockRDD creates correct BlockRDD with block info (357 milliseconds)
[info] - Fair Scheduler Test (107 milliseconds)
[info] - Without WAL enabled: createBlockRDD filters non-existent blocks before creating BlockRDD (118 milliseconds)
[info] - Nested Pool Test (74 milliseconds)
[info] - SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid data (12 milliseconds)
[info] - FIFO scheduler uses root pool and not spark.scheduler.pool property (110 milliseconds)
[info] - With WAL enabled: createBlockRDD creates empty WALBackedBlockRDD when no block info (237 milliseconds)
[info] - FAIR Scheduler uses default pool when spark.scheduler.pool property is not set (84 milliseconds)
[info] - FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool (483 milliseconds)
[info] - Pool should throw IllegalArgumentException when schedulingMode is not supported (2 milliseconds)
[info] - With WAL enabled: createBlockRDD creates correct WALBackedBlockRDD with all block info having WAL info (716 milliseconds)
[info] - Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set (220 milliseconds)
[info] - With WAL enabled: createBlockRDD creates BlockRDD when some block info don't have WAL info (130 milliseconds)
[info] WriteAheadLogBackedBlockRDDSuite:
[info] - Fair Scheduler should use default file(fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set (307 milliseconds)
[info] - Read data available in both block manager and write ahead log (310 milliseconds)
[info] - Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set (265 milliseconds)
[info] DiskStoreSuite:
Spark context available as 'sc' (master = local, app id = local-1594476060074).
Spark session available as 'spark'.
[info] - Read data available only in block manager, not in write ahead log (227 milliseconds)
[info] - reads of memory-mapped and non memory-mapped files are equivalent (57 milliseconds)
[info] - block size tracking (44 milliseconds)
[info] - blocks larger than 2gb (61 milliseconds)
[info] - Read data available only in write ahead log, not in block manager (156 milliseconds)
[info] - block data encryption (79 milliseconds)
[info] BlockManagerReplicationSuite:
[info] - Read data with partially available in block manager, and rest in write ahead log (91 milliseconds)
[info] - Test isBlockValid skips block fetching from BlockManager (267 milliseconds)
[info] - Test whether RDD is valid after removing blocks from block manager (228 milliseconds)
[info] - get peers with addition and removal of block managers (40 milliseconds)
[info] - Test storing of blocks recovered from write ahead log back into block manager (243 milliseconds)
Exception in thread "block-manager-slave-async-thread-pool-7" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Object.wait(Object.java:502)
	at org.apache.spark.storage.BlockInfoManager.lockForWriting(BlockInfoManager.scala:236)
	at org.apache.spark.storage.BlockManager.removeBlock(BlockManager.scala:1571)
	at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply$mcZ$sp(BlockManagerSlaveEndpoint.scala:47)
	at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46)
	at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46)
	at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$1.apply(BlockManagerSlaveEndpoint.scala:86)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3c4e0370 rejected from java.util.concurrent.ThreadPoolExecutor@3289fa89[Shutting down, pool size = 4, active threads = 4, queued tasks = 0, completed tasks = 7]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[7 further RejectedExecutionException stack traces elided: identical rejections of queued CallbackRunnable tasks by the same shutting-down ThreadPoolExecutor@3289fa89]
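The rejections above are a shutdown race, not a test failure: the suite stops a BlockManager, its async thread pool enters the "Shutting down" state printed in the traces, and any callback still being scheduled is refused by the pool's default AbortPolicy. A self-contained sketch of the mechanism (plain JDK executors, unrelated to Spark internals):

    import java.util.concurrent.{Executors, RejectedExecutionException, TimeUnit}

    val pool = Executors.newFixedThreadPool(4)
    pool.execute(new Runnable { def run(): Unit = Thread.sleep(100) })
    pool.shutdown()  // pool now rejects new submissions
    try {
      pool.execute(new Runnable { def run(): Unit = () })
    } catch {
      case e: RejectedExecutionException => println(s"rejected: ${e.getMessage}")
    }
    pool.awaitTermination(1, TimeUnit.SECONDS)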
[info] - read data in block manager and WAL with encryption on (351 milliseconds)
[info] TimeSuite:
[info] - less (1 millisecond)
[info] - lessEq (1 millisecond)
[info] - greater (1 millisecond)
[info] - greaterEq (1 millisecond)
[info] - plus (0 milliseconds)
[info] - minus Time (1 millisecond)
[info] - minus Duration (0 milliseconds)
[info] - floor (1 millisecond)
[info] - isMultipleOf (0 milliseconds)
[info] - min (1 millisecond)
[info] - max (1 millisecond)
[info] - until (2 milliseconds)
[info] - to (2 milliseconds)
[info] DStreamScopeSuite:
[info] - dstream without scope (2 milliseconds)
[info] - input dstream without scope (4 milliseconds)
[info] - scoping simple operations (14 milliseconds)
[info] - scoping nested operations (34 milliseconds)
[info] - transform should allow RDD operations to be captured in scopes (20 milliseconds)
[info] - foreachRDD should allow RDD operations to be captured in scope (45 milliseconds)
[info] ReceiverSuite:
[info] - receiver life cycle (359 milliseconds)
[info] - block generator throttling !!! IGNORED !!!
[info] - block replication - 2x replication (1 second, 211 milliseconds)
[info] - SPARK-15236: use in-memory catalog (6 seconds, 923 milliseconds)
[info] - block replication - 3x replication (2 seconds, 379 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476066560).
Spark session available as 'spark'.
[info] - block replication - mixed between 1x to 5x (3 seconds, 191 milliseconds)
[info] - block replication - off-heap (403 milliseconds)
[info] - block replication - 2x replication without peers (2 milliseconds)
[info] - block replication - replication failures (100 milliseconds)
[info] - broadcast vars (9 seconds, 434 milliseconds)
[info] - block replication - addition and deletion of block managers (574 milliseconds)
[info] BlockManagerProactiveReplicationSuite:
[info] - get peers with addition and removal of block managers (33 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476074960).
Spark session available as 'spark'.
[info] - block replication - 2x replication (1 second, 87 milliseconds)
[info] - write ahead log - generating and cleaning (13 seconds, 398 milliseconds)
[info] StateMapSuite:
[info] - EmptyStateMap (2 milliseconds)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove (20 milliseconds)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove with copy (3 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing (131 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with compaction (15 milliseconds)
[info] - block replication - 3x replication (2 seconds, 598 milliseconds)
[info] - run Spark in yarn-client mode with different configurations, ensuring redaction (41 seconds, 133 milliseconds)
[info] - line wrapper only initialized once when used as encoder outer scope (7 seconds, 758 milliseconds)
[info] - block replication - mixed between 1x to 5x (2 seconds, 948 milliseconds)
[info] - block replication - off-heap (384 milliseconds)
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - block replication - replication failures (111 milliseconds)
[info] - block replication - addition and deletion of block managers (340 milliseconds)
[info] - proactive block replication - 2 replicas - 1 block manager deletions (895 milliseconds)
Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20200711070123-0000).
Spark session available as 'spark'.
[info] - OpenHashMapBasedStateMap - all possible sequences of operations with copies  (9 seconds, 631 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with KryoSerializable states (41 milliseconds)
[info] - EmptyStateMap - serializing and deserializing (32 milliseconds)
[info] - MapWithStateRDDRecord - serializing and deserializing with KryoSerializable states (47 milliseconds)
[info] UIUtilsSuite:
[info] - shortTimeUnitString (2 milliseconds)
[info] - normalizeDuration (8 milliseconds)
[info] - convertToTimeUnit (0 milliseconds)
[info] - formatBatchTime (2 milliseconds)
[info] DurationSuite:
[info] - less (0 milliseconds)
[info] - lessEq (0 milliseconds)
[info] - greater (0 milliseconds)
[info] - greaterEq (0 milliseconds)
[info] - plus (0 milliseconds)
[info] - minus (1 millisecond)
[info] - times (0 milliseconds)
[info] - div (1 millisecond)
[info] - isMultipleOf (0 milliseconds)
[info] - min (0 milliseconds)
[info] - max (1 millisecond)
[info] - isZero (1 millisecond)
[info] - Milliseconds (0 milliseconds)
[info] - Seconds (1 millisecond)
[info] - Minutes (1 millisecond)
[info] MapWithStateRDDSuite:
[info] - proactive block replication - 3 replicas - 2 block manager deletions (327 milliseconds)
[info] - creation from pair RDD (601 milliseconds)
[info] - proactive block replication - 4 replicas - 3 block manager deletions (386 milliseconds)
[info] - updating state and generating mapped data in MapWithStateRDDRecord (12 milliseconds)

// Exiting paste mode, now interpreting.

[info] - proactive block replication - 5 replicas - 4 block manager deletions (1 second, 724 milliseconds)
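The proactive-replication tests above delete block managers and expect lost replicas to be regenerated. A sketch of the configuration under test (standalone-app style; the app name and local-cluster sizing are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val conf = new SparkConf()
      .setAppName("proactive-replication-sketch")
      .setMaster("local-cluster[4,1,1024]")
      .set("spark.storage.replication.proactive", "true")  // re-replicate after executor loss
    val sc = new SparkContext(conf)
    val rdd = sc.parallelize(1 to 1000, 4).persist(StorageLevel.MEMORY_ONLY_2)  // 2 replicas
    rdd.count()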
[info] BlockManagerBasicStrategyReplicationSuite:
[info] - states generated by MapWithStateRDD (2 seconds, 474 milliseconds)
[info] - get peers with addition and removal of block managers (50 milliseconds)
[info] - block replication - 2x replication (921 milliseconds)
[info] - checkpointing (3 seconds, 679 milliseconds)
[info] - define case class and create Dataset together with paste mode (12 seconds, 372 milliseconds)
[info] - block replication - 3x replication (2 seconds, 220 milliseconds)
[info] - checkpointing empty state RDD (954 milliseconds)
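MapWithStateRDD and StateMap back the public mapWithState operator being checkpointed above. A sketch of typical usage (the pair DStream is a parameter here; checkpointing must be enabled on the context for state to be saved):

    import org.apache.spark.streaming.{State, StateSpec}
    import org.apache.spark.streaming.dstream.DStream

    def runningCounts(pairs: DStream[(String, Int)]): DStream[(String, Int)] = {
      val mappingFunc = (word: String, one: Option[Int], state: State[Int]) => {
        val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
        state.update(sum)   // carried across batches via MapWithStateRDDRecord
        (word, sum)
      }
      pairs.mapWithState(StateSpec.function(mappingFunc))
    }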
[info] DStreamClosureSuite:
[info] - user provided closures are actually cleaned (157 milliseconds)
[info] UISeleniumSuite:
Spark context available as 'sc' (master = local, app id = local-1594476095129).
Spark session available as 'spark'.
[info] - block replication - mixed between 1x to 5x (3 seconds, 466 milliseconds)
[info] - block replication - off-heap (766 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476095129).
Spark session available as 'spark'.
[info] - block replication - 2x replication without peers (0 milliseconds)
[info] - attaching and detaching a Streaming tab (4 seconds, 262 milliseconds)
[info] FileBasedWriteAheadLogSuite:
[info] - block replication - replication failures (164 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs (92 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (32 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (30 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (187 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (39 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (74 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (5 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (21 milliseconds)
[info] - FileBasedWriteAheadLog - seqToParIterator (159 milliseconds)
[info] - FileBasedWriteAheadLogWriter - writing data (16 milliseconds)
[info] - FileBasedWriteAheadLogWriter - syncing of data by writing and reading immediately (19 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data (3 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data written with writer (5 milliseconds)
[info] - :replay should work correctly (8 seconds, 424 milliseconds)
[info] - block replication - addition and deletion of block managers (646 milliseconds)
[info] FlatmapIteratorSuite:
[info] - Flatmap Iterator to Disk (129 milliseconds)
[info] - Flatmap Iterator to Memory (127 milliseconds)
[info] - Serializer Reset (539 milliseconds)
[info] RDDSuite:
Spark context available as 'sc' (master = local, app id = local-1594476103430).
Spark session available as 'spark'.
[info] - basic operations (1 second, 45 milliseconds)
[info] - serialization (3 milliseconds)
[info] - countApproxDistinct (1 second, 32 milliseconds)
[info] - SparkContext.union (94 milliseconds)
[info] - SparkContext.union parallel partition listing (165 milliseconds)
[info] - SparkContext.union creates UnionRDD if at least one RDD has no partitioner (4 milliseconds)
[info] - SparkContext.union creates PartitionAwareUnionRDD if all RDDs have partitioners (5 milliseconds)
[info] - PartitionAwareUnionRDD raises exception if at least one RDD has no partitioner (3 milliseconds)
[info] - SPARK-23778: empty RDD in union should not produce a UnionRDD (12 milliseconds)
[info] - partitioner aware union (309 milliseconds)
[info] - UnionRDD partition serialized size should be small (48 milliseconds)
[info] - fold (48 milliseconds)
[info] - fold with op modifying first arg (23 milliseconds)
[info] - aggregate (27 milliseconds)
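The fold/aggregate tests above (including the "op modifying first arg" variants, which check that the seqOp and combOp may mutate and return their first argument) boil down to this REPL-style usage (`sc` as exposed by the shell sessions in this log):

    val nums = sc.parallelize(1 to 100, numSlices = 4)
    val sum = nums.fold(0)(_ + _)
    // aggregate: result type differs from the element type (here: (sum, count))
    val (total, count) = nums.aggregate((0, 0))(
      (acc, n) => (acc._1 + n, acc._2 + 1),   // seqOp, within a partition
      (a, b)   => (a._1 + b._1, a._2 + b._2)) // combOp, across partitions
    // treeAggregate performs the same reduction through a multi-level tree
    val treeTotal = nums.treeAggregate(0)(_ + _, _ + _, depth = 2)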
[info] - spark-shell should find imported types in class constructors and extends clause (4 seconds, 891 milliseconds)
[info] - treeAggregate (1 second, 286 milliseconds)
[info] - treeAggregate with ops modifying first args (920 milliseconds)
[info] - treeReduce (374 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476108364).
Spark session available as 'spark'.
[info] - basic caching (36 milliseconds)
[info] - caching with failures (18 milliseconds)
[info] - empty RDD (179 milliseconds)
[info] - repartitioned RDDs (2 seconds, 220 milliseconds)
[info] - FileBasedWriteAheadLogReader - reading data written with writer after corrupted write (12 seconds, 249 milliseconds)
[info] - FileBasedWriteAheadLogReader - handles errors when file doesn't exist (2 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader (18 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader written with writer (36 milliseconds)
[info] FileBasedWriteAheadLogWithFileCloseAfterWriteSuite:
[info] - FileBasedWriteAheadLog - read all logs (54 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (45 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (110 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (70 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (64 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (852 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (8 milliseconds)
[info] - FileBasedWriteAheadLog - close after write flag (5 milliseconds)
[info] BatchedWriteAheadLogSuite:
[info] - BatchedWriteAheadLog - read all logs (31 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1594476114702).
Spark session available as 'spark'.
[info] - BatchedWriteAheadLog - write logs (33 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (29 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (25 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (24 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (63 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (5 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (12 milliseconds)
[info] - BatchedWriteAheadLog - serializing and deserializing batched records (4 milliseconds)
[info] - BatchedWriteAheadLog - failures in wrappedLog get bubbled up (32 milliseconds)
[info] - BatchedWriteAheadLog - name log with the highest timestamp of aggregated entries (23 milliseconds)
[info] - BatchedWriteAheadLog - shutdown properly (3 milliseconds)
[info] - BatchedWriteAheadLog - fail everything in queue during shutdown (7 milliseconds)
[info] BatchedWriteAheadLogWithCloseFileAfterWriteSuite:
[info] - BatchedWriteAheadLog - read all logs (29 milliseconds)
[info] - BatchedWriteAheadLog - write logs (87 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (98 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (57 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (43 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (265 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (8 milliseconds)
[info] - BatchedWriteAheadLog - close after write flag (4 milliseconds)
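Both WAL suites run against the pluggable org.apache.spark.streaming.util.WriteAheadLog developer API. A minimal (non-durable, single-threaded) skeleton of that contract, for orientation only:

    import java.nio.ByteBuffer
    import java.util.{Iterator => JIterator}
    import scala.collection.JavaConverters._
    import scala.collection.mutable.ArrayBuffer
    import org.apache.spark.streaming.util.{WriteAheadLog, WriteAheadLogRecordHandle}

    class InMemoryHandle(val index: Int) extends WriteAheadLogRecordHandle

    class InMemoryWriteAheadLog extends WriteAheadLog {
      private val records = ArrayBuffer[ByteBuffer]()
      override def write(record: ByteBuffer, time: Long): WriteAheadLogRecordHandle = {
        records += record
        new InMemoryHandle(records.size - 1)
      }
      override def read(handle: WriteAheadLogRecordHandle): ByteBuffer =
        records(handle.asInstanceOf[InMemoryHandle].index)
      override def readAll(): JIterator[ByteBuffer] = records.iterator.asJava
      override def clean(threshTime: Long, waitForCompletion: Boolean): Unit = ()  // no retention here
      override def close(): Unit = records.clear()
    }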
[info] CheckpointSuite:
[info] - non-existent checkpoint dir (3 milliseconds)
[info] - spark-shell should shadow val/def definitions correctly (10 seconds, 648 milliseconds)
Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20200711070200-0000).
Spark session available as 'spark'.

// Exiting paste mode, now interpreting.

[info] - basic rdd checkpoints + dstream graph checkpoint recovery (9 seconds, 435 milliseconds)
[info] - recovery of conf through checkpoints (336 milliseconds)
[info] - repartitioned RDDs perform load balancing (14 seconds, 930 milliseconds)
[info] - get correct spark.driver.[host|port] from checkpoint (274 milliseconds)
[info] - SPARK-26633: ExecutorClassLoader.getResourceAsStream find REPL classes (9 seconds, 34 milliseconds)
[info] SingletonReplSuite:
[info] - coalesced RDDs (222 milliseconds)
[info] - SPARK-30199 get ui port and blockmanager port (207 milliseconds)
[info] - coalesced RDDs with locality (60 milliseconds)
[info] - coalesced RDDs with partial locality (49 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with map and reduceByKey operations (960 milliseconds)
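These CheckpointSuite "recovery with ..." cases all follow the standard checkpoint-recovery pattern: rebuild the StreamingContext from the checkpoint directory when one exists, otherwise create it fresh. A sketch (checkpointDir and createContext stand in for test-specific values):

    import org.apache.spark.streaming.StreamingContext

    def startWithRecovery(checkpointDir: String)(createContext: () => StreamingContext): Unit = {
      val ssc = StreamingContext.getOrCreate(checkpointDir, createContext)
      ssc.start()
      ssc.awaitTermination()
    }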
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

[info] - coalesced RDDs with locality, large scale (10K partitions) (1 second, 922 milliseconds)
-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,4)

Spark context available as 'sc' (master = local-cluster[2,1,1024], app id = app-20200711070208-0000).
Spark session available as 'spark'.
-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,4)

[info] - recovery with invertible reduceByKeyAndWindow operation (1 second, 461 milliseconds)
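The "invertible" variant recovered above computes each window incrementally: new batches are added and expired batches subtracted, instead of re-reducing the whole window. A sketch of that public operator (requires a checkpointed context):

    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.dstream.DStream

    def windowedCounts(pairs: DStream[(String, Int)]): DStream[(String, Int)] =
      pairs.reduceByKeyAndWindow(
        (a: Int, b: Int) => a + b,   // values entering the window
        (a: Int, b: Int) => a - b,   // values leaving the window
        windowDuration = Seconds(4),
        slideDuration  = Seconds(2))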
[info] - coalesced RDDs with partial locality, large scale (10K partitions) (813 milliseconds)
[info] - coalesced RDDs with locality, fail first pass (15 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

[info] - zipped RDDs (68 milliseconds)
[info] - partition pruning (24 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFiles operation (2 seconds, 213 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsNewAPIHadoopFiles operation (1 second, 818 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(b,1)
(a,2)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(b,1)
(a,2)

[info] - simple foreach with accumulator (4 seconds, 511 milliseconds)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFile inside transform operation (2 seconds, 749 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,5)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,6)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,8)

-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,9)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,10)

[info] - external vars (2 seconds, 884 milliseconds)
[info] - recovery with updateStateByKey operation (2 seconds, 164 milliseconds)
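The monotonically increasing (a,1) ... (a,10) totals printed above are the signature of a running count kept with updateStateByKey, which also requires a checkpointed context. A minimal sketch, assuming pairs is a DStream[(String, Int)]:

    import org.apache.spark.streaming.dstream.DStream

    def runningCounts(pairs: DStream[(String, Int)]): DStream[(String, Int)] =
      pairs.updateStateByKey { (values: Seq[Int], state: Option[Int]) =>
        Some(values.sum + state.getOrElse(0))   // keep a running total per key
      }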
[info] - external classes (1 second, 514 milliseconds)
[info] - external functions (1 second, 5 milliseconds)
[info] - recovery maintains rate controller (3 seconds, 331 milliseconds)
[info] - external functions that access vars (2 seconds, 506 milliseconds)
[info] - collect large number of empty partitions (13 seconds, 694 milliseconds)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
	at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:729)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:100)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:99)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - take (2 seconds, 341 milliseconds)
[info] - top with predefined ordering (97 milliseconds)
[info] - top with custom ordering (14 milliseconds)
[info] - takeOrdered with predefined ordering (15 milliseconds)
[info] - takeOrdered with limit 0 (1 millisecond)
[info] - takeOrdered with custom ordering (13 milliseconds)
[info] - broadcast vars (2 seconds, 715 milliseconds)
[info] - isEmpty (109 milliseconds)
[info] - sample preserves partitioner (2 milliseconds)
[info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction *** FAILED *** (1 minute, 6 seconds)
[info]   FAILED did not equal FINISHED (stdout/stderr was not captured) (BaseYarnClusterSuite.scala:201)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.deploy.yarn.BaseYarnClusterSuite.checkResult(BaseYarnClusterSuite.scala:201)
[info] - recovery with file input stream (4 seconds, 657 milliseconds)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite.org$apache$spark$deploy$yarn$YarnClusterSuite$$testBasicYarnApp(YarnClusterSuite.scala:242)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply$mcV$sp(YarnClusterSuite.scala:105)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply(YarnClusterSuite.scala:105)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply(YarnClusterSuite.scala:105)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - DStreamCheckpointData.restore invoking times (480 milliseconds)
[info] - recovery from checkpoint contains array object (1 second, 114 milliseconds)
[info] - SPARK-11267: the race condition of two checkpoints in a batch (57 milliseconds)
[info] - interacting with files (2 seconds, 30 milliseconds)
[info] - SPARK-28912: Fix MatchError in getCheckpointFiles (22 milliseconds)
[info] - SPARK-6847: stack overflow when updateStateByKey is followed by a checkpointed dstream (560 milliseconds)
[info] MapWithStateSuite:
[info] - state - get, exists, update, remove,  (4 milliseconds)
[info] - mapWithState - basic operations with simple API (638 milliseconds)
[info] - mapWithState - basic operations with advanced API (475 milliseconds)
[info] - mapWithState - type inferencing and class tags (7 milliseconds)
[info] - mapWithState - states as mapped data (388 milliseconds)
[info] - mapWithState - initial states, with nothing returned as from mapping function (471 milliseconds)
[info] - local-cluster mode (3 seconds, 9 milliseconds)
[info] - mapWithState - state removing (587 milliseconds)
[info] - SPARK-1199 two instances of same class don't type check. (1 second, 541 milliseconds)
[info] - mapWithState - state timing out (1 second, 404 milliseconds)
[info] - mapWithState - checkpoint durations (89 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------

[info] - SPARK-2452 compound statements. (455 milliseconds)
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,3)
(b,2)
(c,1)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(c,1)
(a,4)
(b,3)

-------------------------------------------
Time: 6000 ms
-------------------------------------------
(b,3)
(c,1)
(a,5)

-------------------------------------------
Time: 7000 ms
-------------------------------------------
(a,5)
(b,3)
(c,1)

[info] - mapWithState - driver failure recovery (1 second, 786 milliseconds)
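The MapWithStateSuite results above cover the mapWithState API, which threads per-key State through a user mapping function instead of rebuilding all state each batch as updateStateByKey does. A minimal sketch with assumed key/value/state types:

    import org.apache.spark.streaming.{State, StateSpec}
    import org.apache.spark.streaming.dstream.DStream

    val mappingFunc = (key: String, value: Option[Int], state: State[Int]) => {
      val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
      state.update(sum)        // persist the new per-key state
      (key, sum)               // record emitted downstream
    }

    def stateful(pairs: DStream[(String, Int)]) =
      pairs.mapWithState(StateSpec.function(mappingFunc))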
[info] BlockGeneratorSuite:
[info] - block generation and data callbacks (30 milliseconds)
[info] - stop ensures correct shutdown (232 milliseconds)
[info] - block push errors are reported (27 milliseconds)
[info] StreamingJobProgressListenerSuite:
[info] - onBatchSubmitted, onBatchStarted, onBatchCompleted, onReceiverStarted, onReceiverError, onReceiverStopped (89 milliseconds)
[info] - Remove the old completed batches when exceeding the limit (96 milliseconds)
[info] - out-of-order onJobStart and onBatchXXX (374 milliseconds)
[info] - detect memory leak (381 milliseconds)
[info] ExecutorAllocationManagerSuite:
[info] - basic functionality (89 milliseconds)
[info] - requestExecutors policy (163 milliseconds)
[info] - killExecutor policy (21 milliseconds)
[info] - parameter validation (26 milliseconds)
[info] - enabling and disabling (979 milliseconds)
[info] RateLimiterSuite:
[info] - rate limiter initializes even without a maxRate set (1 millisecond)
[info] - rate limiter updates when below maxRate (1 millisecond)
[info] - rate limiter stays below maxRate despite large updates (0 milliseconds)
[info] StreamingContextSuite:
[info] - from no conf constructor (129 milliseconds)
[info] - from no conf + spark home (108 milliseconds)
[info] - from no conf + spark home + env (98 milliseconds)
[info] - from conf with settings (238 milliseconds)
[info] - from existing SparkContext (145 milliseconds)
[info] - from existing SparkContext with settings (119 milliseconds)
[info] - SPARK-2576 importing implicits (5 seconds, 514 milliseconds)
[info] - from checkpoint (170 milliseconds)
[info] - checkPoint from conf (99 milliseconds)
[info] - state matching (1 millisecond)
[info] - start and stop state check (98 milliseconds)
[info] - start with non-serializable DStream checkpoints (207 milliseconds)
[info] - start failure should stop internal components (70 milliseconds)
[info] - start should set local properties of streaming jobs correctly (219 milliseconds)
[info] - start multiple times (87 milliseconds)
[info] - stop multiple times (97 milliseconds)
[info] - stop before start (66 milliseconds)
[info] - start after stop (69 milliseconds)
[info] - stop only streaming context (331 milliseconds)
[info] - stop(stopSparkContext=true) after stop(stopSparkContext=false) (85 milliseconds)
[info] - Datasets and encoders (3 seconds, 88 milliseconds)
[info] - SPARK-2632 importing a method from non serializable class and not using it. (2 seconds, 509 milliseconds)
[info] - collecting objects of class defined in repl (1 second, 506 milliseconds)
[info] - collecting objects of class defined in repl - shuffling (2 seconds, 6 milliseconds)
[info] - stop gracefully (9 seconds, 218 milliseconds)
[info] - takeSample (23 seconds, 832 milliseconds)
[info] - takeSample from an empty rdd (9 milliseconds)
[info] - stop gracefully even if a receiver misses StopReceiver (796 milliseconds)
[info] - randomSplit (411 milliseconds)
[info] - runJob on an invalid partition (4 milliseconds)
[info] - sort an empty RDD (36 milliseconds)
[info] - sortByKey (193 milliseconds)
[info] - sortByKey ascending parameter (150 milliseconds)
[info] - sortByKey with explicit ordering (83 milliseconds)
[info] - repartitionAndSortWithinPartitions (78 milliseconds)
[info] - cartesian on empty RDD (12 milliseconds)
[info] - cartesian on non-empty RDDs (66 milliseconds)
[info] - intersection (63 milliseconds)
[info] - intersection strips duplicates in an input (47 milliseconds)
[info] - zipWithIndex (28 milliseconds)
[info] - zipWithIndex with a single partition (11 milliseconds)
[info] - zipWithIndex chained with other RDDs (SPARK-4433) (36 milliseconds)
[info] - zipWithUniqueId (51 milliseconds)
[info] - retag with implicit ClassTag (24 milliseconds)
[info] - parent method (21 milliseconds)
[info] - getNarrowAncestors (32 milliseconds)
[info] - getNarrowAncestors with multiple parents (18 milliseconds)
[info] - getNarrowAncestors with cycles (17 milliseconds)
[info] - task serialization exception should not hang scheduler (26 milliseconds)
[info] - RDD.partitions() fails fast when partitions indicies are incorrect (SPARK-13021) (2 milliseconds)
[info] - nested RDDs are not supported (SPARK-5063) (15 milliseconds)
[info] - actions cannot be performed inside of transformations (SPARK-5063) (23 milliseconds)
[info] - custom RDD coalescer (411 milliseconds)
[info] - SPARK-18406: race between end-of-task and completion iterator read lock release (22 milliseconds)
[info] - SPARK-23496: order of input partitions can result in severe skew in coalesce (6 milliseconds)
[info] - cannot run actions after SparkContext has been stopped (SPARK-5063) (179 milliseconds)
[info] - cannot call methods on a stopped SparkContext (SPARK-5063) (2 milliseconds)
[info] BasicSchedulerIntegrationSuite:
[info] - super simple job (178 milliseconds)
[info] - multi-stage job (914 milliseconds)
[info] - replicating blocks of object with class defined in repl (6 seconds, 342 milliseconds)
[info] - job with fetch failure (1 second, 146 milliseconds)
[info] - job failure after 4 attempts (851 milliseconds)
[info] OutputCommitCoordinatorSuite:
[info] - Only one of two duplicate commit tasks should commit (197 milliseconds)
[info] - If commit fails, if task is retried it should not be locked, and will succeed. (229 milliseconds)
[info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (32 seconds, 49 milliseconds)
[info] - should clone and clean line object in ClosureCleaner (7 seconds, 34 milliseconds)
[info] - Job should not complete if all commits are denied (5 seconds, 6 milliseconds)
[info] - SPARK-31399: should clone+clean line object w/ non-serializable state in ClosureCleaner (2 seconds, 6 milliseconds)
[info] - Only authorized committer failures can clear the authorized committer lock (SPARK-6614) (7 milliseconds)
[info] - SPARK-19631: Do not allow failed attempts to be authorized for committing (4 milliseconds)
[info] - SPARK-24589: Differentiate tasks from different stage attempts (6 milliseconds)
[info] - SPARK-31399: ClosureCleaner should discover indirectly nested closure in inner class (2 seconds, 14 milliseconds)
[info] - SPARK-24589: Make sure stage state is cleaned up (1 second, 563 milliseconds)
[info] TaskMetricsSuite:
[info] - mutating values (1 millisecond)
[info] - mutating shuffle read metrics values (1 millisecond)
[info] - mutating shuffle write metrics values (0 milliseconds)
[info] - mutating input metrics values (1 millisecond)
[info] - mutating output metrics values (0 milliseconds)
[info] - merging multiple shuffle read metrics (1 millisecond)
[info] - additional accumulables (1 millisecond)
[info] OutputCommitCoordinatorIntegrationSuite:
[info] - exception thrown in OutputCommitter.commitTask() (246 milliseconds)
[info] UIUtilsSuite:
[info] - makeDescription(plainText = false) (58 milliseconds)
[info] - makeDescription(plainText = true) (12 milliseconds)
[info] - SPARK-11906: Progress bar should not overflow because of speculative tasks (3 milliseconds)
[info] - decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.) (1 millisecond)
[info] - SPARK-20393: Prevent newline characters in parameters. (1 millisecond)
[info] - SPARK-20393: Prevent script from parameters running on page. (1 millisecond)
[info] - SPARK-20393: Prevent javascript from parameters running on page. (1 millisecond)
[info] - SPARK-20393: Prevent links from parameters on page. (0 milliseconds)
[info] - SPARK-20393: Prevent popups from parameters on page. (0 milliseconds)
[info] SumEvaluatorSuite:
[info] - correct handling of count 1 (4 milliseconds)
[info] - correct handling of count 0 (2 milliseconds)
[info] - correct handling of NaN (1 millisecond)
[info] - correct handling of > 1 values (25 milliseconds)
[info] - test count > 1 (5 milliseconds)
[info] ApplicationCacheSuite:
[info] - Completed UI get (65 milliseconds)
[info] - Test that if an attempt ID is set, it must be used in lookups (5 milliseconds)
[info] - newProductSeqEncoder with REPL defined class (1 second, 6 milliseconds)
[info] - Incomplete apps refreshed (18 milliseconds)
[info] - stop slow receiver gracefully (16 seconds, 57 milliseconds)
[info] - registering and de-registering of streamingSource (290 milliseconds)
[info] - Large Scale Application Eviction (673 milliseconds)
[info] - Attempts are Evicted (22 milliseconds)
[info] - SPARK-28709 registering and de-registering of progressListener (228 milliseconds)
[info] - redirect includes query params (42 milliseconds)
[info] RpcAddressSuite:
[info] - hostPort (2 milliseconds)
[info] - fromSparkURL (2 milliseconds)
[info] - fromSparkURL: a typo url (2 milliseconds)
[info] - fromSparkURL: invalid scheme (2 milliseconds)
[info] - toSparkURL (1 millisecond)
[info] HistoryServerSuite:
[info] - awaitTermination (2 seconds, 133 milliseconds)
[info] - awaitTermination after stop (223 milliseconds)
[info] - awaitTermination with error in task (237 milliseconds)
[info] - application list json (2 seconds, 190 milliseconds)
[info] - awaitTermination with error in job generation (398 milliseconds)
[info] - completed app list json (71 milliseconds)
[info] - running app list json (25 milliseconds)
[info] - minDate app list json (16 milliseconds)
[info] - maxDate app list json (31 milliseconds)
[info] - maxDate2 app list json (12 milliseconds)
[info] - minEndDate app list json (30 milliseconds)
[info] - maxEndDate app list json (17 milliseconds)
[info] - minEndDate and maxEndDate app list json (28 milliseconds)
[info] - minDate and maxEndDate app list json (12 milliseconds)
[info] - limit app list json (16 milliseconds)
[info] - one app json (127 milliseconds)
[info] - one app multi-attempt json (15 milliseconds)
[info] - awaitTerminationOrTimeout (1 second, 235 milliseconds)
[info] HashExpressionsSuite:
[info] - getOrCreate (2 seconds, 17 milliseconds)
[info] - md5 (1 second, 630 milliseconds)
[info] - sha1 (214 milliseconds)
[info] - getActive and getActiveOrCreate (821 milliseconds)
[info] - sha2 (475 milliseconds)
[info] - job list json (3 seconds, 935 milliseconds)
[info] - crc32 (190 milliseconds)
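The md5, sha1, sha2, and crc32 expressions tested above (along with murmur3, exposed as hash) are available as public SQL functions; the hive-hash cases that follow exercise an internal expression with no public function counterpart. A quick usage sketch against a throwaway DataFrame:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{crc32, hash, md5, sha1, sha2}

    val spark = SparkSession.builder().master("local[*]").appName("HashDemo").getOrCreate()
    import spark.implicits._

    val df = Seq("a", "b").toDF("s")
    // sha2 takes the bit length of the digest (224, 256, 384, or 512)
    df.select(md5($"s"), sha1($"s"), sha2($"s", 256), crc32($"s"), hash($"s")).show()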
[info] - hive-hash for null (8 milliseconds)
[info] - hive-hash for boolean (2 milliseconds)
[info] - hive-hash for byte (3 milliseconds)
[info] - hive-hash for short (1 millisecond)
[info] - hive-hash for int (2 milliseconds)
[info] - hive-hash for long (3 milliseconds)
[info] - hive-hash for float (2 milliseconds)
[info] - hive-hash for double (2 milliseconds)
[info] - hive-hash for string (1 millisecond)
[info] - hive-hash for date type (6 milliseconds)
[info] - hive-hash for timestamp type (26 milliseconds)
[info] - hive-hash for CalendarInterval type (6 milliseconds)
[info] - hive-hash for array (2 milliseconds)
[info] - hive-hash for map (1 millisecond)
[info] - hive-hash for struct (2 milliseconds)
[info] - job list from multi-attempt app json(1) (573 milliseconds)
[info] - getActiveOrCreate with checkpoint (848 milliseconds)
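The getOrCreate / getActiveOrCreate tests above verify that a StreamingContext is recovered from a checkpoint directory when one exists and otherwise built by a creating function. A sketch with an assumed checkpoint path:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Milliseconds, StreamingContext}

    val checkpointDir = "/tmp/checkpoint"   // assumed path

    def createContext(): StreamingContext = {
      val conf = new SparkConf().setMaster("local[2]").setAppName("GetOrCreateDemo")
      val ssc = new StreamingContext(conf, Milliseconds(500))
      ssc.checkpoint(checkpointDir)
      // ... define the dstream graph here ...
      ssc
    }

    // Recovers from the checkpoint if one exists, otherwise builds a fresh context.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)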
[info] - multiple streaming contexts (80 milliseconds)
[info] - job list from multi-attempt app json(2) (541 milliseconds)
[info] - one job json (11 milliseconds)
[info] - succeeded job list json (12 milliseconds)
[info] - succeeded&failed job list json (14 milliseconds)
[info] - executor list json (20 milliseconds)
[info] - stage list json (81 milliseconds)
[info] - complete stage list json (11 milliseconds)
[info] - failed stage list json (16 milliseconds)
[info] - DStream and generated RDD creation sites (558 milliseconds)
[info] - one stage json (74 milliseconds)
[info] - one stage attempt json (45 milliseconds)
[info] - throw exception on using active or stopped context (130 milliseconds)
[info] - queueStream doesn't support checkpointing (323 milliseconds)
[info] - stage task summary w shuffle write (767 milliseconds)
[info] - stage task summary w shuffle read (41 milliseconds)
[info] - stage task summary w/ custom quantiles (77 milliseconds)
[info] - stage task list (32 milliseconds)
[info] - stage task list w/ offset & length (54 milliseconds)
[info] - stage task list w/ sortBy (26 milliseconds)
[info] - stage task list w/ sortBy short names: -runtime (20 milliseconds)
[info] - stage task list w/ sortBy short names: runtime (19 milliseconds)
[info] - stage list with accumulable json (31 milliseconds)
[info] - stage with accumulable json (34 milliseconds)
[info] - stage task list from multi-attempt app json(1) (14 milliseconds)
[info] - stage task list from multi-attempt app json(2) (20 milliseconds)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@51724fa9 rejected from java.util.concurrent.ThreadPoolExecutor@5fb5abf7[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
	at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - Creating an InputDStream but not using it should not crash (1 second, 28 milliseconds)
[info] - blacklisting for stage (528 milliseconds)
[info] - murmur3/xxHash64/hive hash: struct<null:null,boolean:boolean,byte:tinyint,short:smallint,int:int,long:bigint,float:float,double:double,bigDecimal:decimal(38,18),smallDecimal:decimal(10,0),string:string,binary:binary,date:date,timestamp:timestamp,udt:examplepoint> (3 seconds, 351 milliseconds)
[info] - blacklisting node for stage (533 milliseconds)
[info] - rdd list storage json (15 milliseconds)
[info] - executor node blacklisting (548 milliseconds)
[info] - SPARK-30633: xxHash64 with long seed: struct<null:null,boolean:boolean,byte:tinyint,short:smallint,int:int,long:bigint,float:float,double:double,bigDecimal:decimal(38,18),smallDecimal:decimal(10,0),string:string,binary:binary,date:date,timestamp:timestamp,udt:examplepoint> (760 milliseconds)
[info] - executor node blacklisting unblacklisting (513 milliseconds)
[info] - executor memory usage (14 milliseconds)
[info] - app environment (47 milliseconds)
[info] - download all logs for app with multiple attempts (4 seconds, 831 milliseconds)
[info] - download one log for app with multiple attempts (690 milliseconds)
[info] - response codes on bad paths (73 milliseconds)
[info] - automatically retrieve uiRoot from request through Knox (83 milliseconds)
[info] - static relative links are prefixed with uiRoot (spark.ui.proxyBase) (29 milliseconds)
[info] - /version api endpoint (7 milliseconds)
[info] - run Spark in yarn-client mode with additional jar (32 seconds, 139 milliseconds)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
	at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:729)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
	at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:850)
	at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:848)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - SPARK-18560 Receiver data should be deserialized properly. (13 seconds, 821 milliseconds)
[info] - SPARK-22955 graceful shutdown shouldn't lead to job generation error (171 milliseconds)
[info] RateControllerSuite:
[info] - RateController - rate controller publishes updates after batches complete (505 milliseconds)
[info] - ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) (7 seconds, 124 milliseconds)
[info] - security manager starts with spark.authenticate set (34 milliseconds)
[info] - ReceiverRateController - published rates reach receivers (593 milliseconds)
[info] FailureSuite:
[info] - incomplete apps get refreshed (6 seconds, 188 milliseconds)
[info] - ui and api authorization checks (3 seconds, 201 milliseconds)
[info] - access history application defaults to the last attempt id (743 milliseconds)
[info] JVMObjectTrackerSuite:
[info] - JVMObjectId does not take null IDs (3 milliseconds)
[info] - JVMObjectTracker (4 milliseconds)
[info] BlockManagerSuite:
[info] - StorageLevel object caching (1 millisecond)
[info] - BlockManagerId object caching (1 millisecond)
[info] - BlockManagerId.isDriver() backwards-compatibility with legacy driver ids (SPARK-6716) (0 milliseconds)
[info] - master + 1 manager interaction (91 milliseconds)
[info] - master + 2 managers interaction (227 milliseconds)
[info] - removing block (91 milliseconds)
[info] - removing rdd (126 milliseconds)
[info] - removing broadcast (587 milliseconds)
[info] - reregistration on heart beat (106 milliseconds)
[info] - reregistration on block update (64 milliseconds)
[info] - reregistration doesn't dead lock (1 second, 191 milliseconds)
[info] - correct BlockResult returned from get() calls (107 milliseconds)
[info] - optimize a location order of blocks without topology information (84 milliseconds)
[info] - optimize a location order of blocks with topology information (51 milliseconds)
[info] - SPARK-9591: getRemoteBytes from another location when Exception throw (382 milliseconds)
[info] - SPARK-14252: getOrElseUpdate should still read from remote storage (130 milliseconds)
[info] - in-memory LRU storage (56 milliseconds)
[info] - in-memory LRU storage with serialization (186 milliseconds)
[info] - in-memory LRU storage with off-heap (159 milliseconds)
[info] - in-memory LRU for partitions of same RDD (66 milliseconds)
[info] - in-memory LRU for partitions of multiple RDDs (59 milliseconds)
[info] - on-disk storage (encryption = off) (139 milliseconds)
[info] - on-disk storage (encryption = on) (117 milliseconds)
[info] - disk and memory storage (encryption = off) (92 milliseconds)
[info] - disk and memory storage (encryption = on) (125 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = off) (83 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = on) (124 milliseconds)
[info] - disk and memory storage with serialization (encryption = off) (129 milliseconds)
[info] - disk and memory storage with serialization (encryption = on) (152 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = off) (80 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = on) (73 milliseconds)
[info] - disk and off-heap memory storage (encryption = off) (131 milliseconds)
[info] - disk and off-heap memory storage (encryption = on) (154 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = off) (70 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = on) (80 milliseconds)
[info] - LRU with mixed storage levels (encryption = off) (154 milliseconds)
[info] - LRU with mixed storage levels (encryption = on) (147 milliseconds)
[info] - in-memory LRU with streams (encryption = off) (47 milliseconds)
[info] - in-memory LRU with streams (encryption = on) (48 milliseconds)
[info] - LRU with mixed storage levels and streams (encryption = off) (262 milliseconds)
[info] - LRU with mixed storage levels and streams (encryption = on) (271 milliseconds)
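The storage combinations exercised above map onto the public StorageLevel constants used with RDD.persist. A brief sketch (one level per RDD, since a level cannot be changed once set):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("Levels"))

    val a = sc.parallelize(1 to 100)
    a.persist(StorageLevel.MEMORY_ONLY_SER)   // in-memory LRU, serialized
    val b = sc.parallelize(1 to 100)
    b.persist(StorageLevel.MEMORY_AND_DISK)   // falls back to disk on eviction
    val c = sc.parallelize(1 to 100)
    c.persist(StorageLevel.OFF_HEAP)          // off-heap memory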
[info] - negative byte values in ByteBufferInputStream (2 milliseconds)
[info] - overly large block (76 milliseconds)
[info] - block compression (547 milliseconds)
[info] - block store put failure (31 milliseconds)
[info] - turn off updated block statuses (45 milliseconds)
[info] - updated block statuses (151 milliseconds)
[info] - query block statuses (117 milliseconds)
[info] - get matching blocks (403 milliseconds)
[info] - SPARK-1194 regression: fix the same-RDD rule for cache replacement (66 milliseconds)
[info] - safely unroll blocks through putIterator (disk) (83 milliseconds)
[info] - read-locked blocks cannot be evicted from memory (130 milliseconds)
[info] - remove block if a read fails due to missing DiskStore files (SPARK-15736) (349 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should fail after hitting a threshold) (118 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should succeed after location refresh) (53 milliseconds)
[info] - SPARK-17484: block status is properly updated following an exception in put() (125 milliseconds)
[info] - SPARK-17484: master block locations are updated following an invalid remote block fetch (178 milliseconds)
[info] - murmur3/xxHash64/hive hash: struct<arrayOfNull:array<null>,arrayOfString:array<string>,arrayOfArrayOfString:array<array<string>>,arrayOfArrayOfInt:array<array<int>>,arrayOfMap:array<map<string,string>>,arrayOfStruct:array<struct<str:string>>,arrayOfUDT:array<examplepoint>> (40 seconds, 373 milliseconds)
[info] - run Spark in yarn-cluster mode with additional jar (29 seconds, 55 milliseconds)
[info] - SPARK-20640: Shuffle registration timeout and maxAttempts conf are working (5 seconds, 276 milliseconds)
[info] - fetch remote block to local disk if block size is larger than threshold (28 milliseconds)
[info] - query locations of blockIds (6 milliseconds)
[info] CompactBufferSuite:
[info] - empty buffer (2 milliseconds)
[info] - basic inserts (6 milliseconds)
[info] - adding sequences (3 milliseconds)
[info] - adding the same buffer to itself (2 milliseconds)
[info] MasterSuite:
[info] - can use a custom recovery mode factory (64 milliseconds)
[info] - master correctly recover the application (530 milliseconds)
[info] - master/worker web ui available (287 milliseconds)
[info] - SPARK-30633: xxHash64 with long seed: struct<arrayOfNull:array<null>,arrayOfString:array<string>,arrayOfArrayOfString:array<array<string>>,arrayOfArrayOfInt:array<array<int>>,arrayOfMap:array<map<string,string>>,arrayOfStruct:array<struct<str:string>>,arrayOfUDT:array<examplepoint>> (10 seconds, 820 milliseconds)
[info] - multiple failures with map (39 seconds, 995 milliseconds)
[info] - run Spark in yarn-cluster mode unsuccessfully (17 seconds, 29 milliseconds)
[info] - murmur3/xxHash64/hive hash: struct<mapOfIntAndString:map<int,string>,mapOfStringAndArray:map<string,array<string>>,mapOfArrayAndInt:map<array<string>,int>,mapOfArray:map<array<string>,array<string>>,mapOfStringAndStruct:map<string,struct<str:string>>,mapOfStructAndString:map<struct<str:string>,string>,mapOfStruct:map<struct<str:string>,struct<str:string>>> (15 seconds, 806 milliseconds)
[info] - master/worker web ui available with reverseProxy (30 seconds, 218 milliseconds)
[info] - basic scheduling - spread out (126 milliseconds)
[info] - basic scheduling - no spread out (57 milliseconds)
[info] - basic scheduling with more memory - spread out (70 milliseconds)
[info] - basic scheduling with more memory - no spread out (66 milliseconds)
[info] - scheduling with max cores - spread out (73 milliseconds)
[info] - scheduling with max cores - no spread out (63 milliseconds)
[info] - scheduling with cores per executor - spread out (59 milliseconds)
[info] - scheduling with cores per executor - no spread out (70 milliseconds)
[info] - scheduling with cores per executor AND max cores - spread out (75 milliseconds)
[info] - scheduling with cores per executor AND max cores - no spread out (68 milliseconds)
[info] - scheduling with executor limit - spread out (69 milliseconds)
[info] - scheduling with executor limit - no spread out (52 milliseconds)
[info] - scheduling with executor limit AND max cores - spread out (73 milliseconds)
[info] - scheduling with executor limit AND max cores - no spread out (112 milliseconds)
[info] - scheduling with executor limit AND cores per executor - spread out (70 milliseconds)
[info] - scheduling with executor limit AND cores per executor - no spread out (71 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - spread out (69 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - no spread out (107 milliseconds)
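The scheduling matrix above corresponds to standalone-mode settings: spreadOut toggles whether executors are spread across workers or packed onto few, while per-executor cores and an app-wide core cap bound placement. As a config sketch (values assumed):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.deploy.spreadOut", "true")   // spread executors across workers ("spread out")
      .set("spark.cores.max", "8")             // app-wide core cap ("max cores")
      .set("spark.executor.cores", "2")        // cores per executor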
[info] - SPARK-30633: xxHash64 with long seed: struct<mapOfIntAndString:map<int,string>,mapOfStringAndArray:map<string,array<string>>,mapOfArrayAndInt:map<array<string>,int>,mapOfArray:map<array<string>,array<string>>,mapOfStringAndStruct:map<string,struct<str:string>>,mapOfStructAndString:map<struct<str:string>,string>,mapOfStruct:map<struct<str:string>,struct<str:string>>> (8 seconds, 905 milliseconds)
[info] - SPARK-13604: Master should ask Worker kill unknown executors and drivers (134 milliseconds)
[info] - SPARK-20529: Master should reply the address received from worker (86 milliseconds)
[info] - SPARK-19900: there should be a corresponding driver for the app after relaunching driver (2 seconds, 381 milliseconds)
[info] CompletionIteratorSuite:
[info] - basic test (1 millisecond)
[info] - reference to sub iterator should not be available after completion (734 milliseconds)
[info] SparkListenerSuite:
[info] - don't call sc.stop in listener (118 milliseconds)
[info] - basic creation and shutdown of LiveListenerBus (5 milliseconds)
[info] - bus.stop() waits for the event queue to completely drain (3 milliseconds)
[info] - metrics for dropped listener events (5 milliseconds)
[info] - basic creation of StageInfo (97 milliseconds)
[info] - basic creation of StageInfo with shuffle (343 milliseconds)
[info] - StageInfo with fewer tasks than partitions (103 milliseconds)
[info] - murmur3/xxHash64/hive hash: struct<structOfString:struct<str:string>,structOfStructOfString:struct<struct:struct<str:string>>,structOfArray:struct<array:array<string>>,structOfMap:struct<map:map<string,string>>,structOfArrayAndMap:struct<array:array<string>,map:map<string,string>>,structOfUDT:struct<udt:examplepoint>> (5 seconds, 153 milliseconds)
[info] - local metrics (1 second, 849 milliseconds)
[info] - SPARK-30633: xxHash64 with long seed: struct<structOfString:struct<str:string>,structOfStructOfString:struct<struct:struct<str:string>>,structOfArray:struct<array:array<string>>,structOfMap:struct<map:map<string,string>>,structOfArrayAndMap:struct<array:array<string>,map:map<string,string>>,structOfUDT:struct<udt:examplepoint>> (1 second, 417 milliseconds)
[info] - hive-hash for decimal (4 milliseconds)
[info] - onTaskGettingResult() called when result fetched remotely (530 milliseconds)
[info] - onTaskGettingResult() not called when result sent directly (102 milliseconds)
[info] - onTaskEnd() should be called for all started tasks, even after job has been killed (129 milliseconds)
[info] - SparkListener moves on if a listener throws an exception (13 milliseconds)
[info] - registering listeners via spark.extraListeners (86 milliseconds)
[info] - add and remove listeners to/from LiveListenerBus queues (4 milliseconds)
[info] - interrupt within listener is handled correctly: throw interrupt (25 milliseconds)
[info] - interrupt within listener is handled correctly: set Thread interrupted (22 milliseconds)
[info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: throw interrupt (14 milliseconds)
[info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: set Thread interrupted (18 milliseconds)
[info] SortShuffleSuite:
[info] - groupByKey without compression (257 milliseconds)
[info] - SPARK-18207: Compute hash for a lot of expressions (1 second, 142 milliseconds)
[info] - multiple failures with updateStateByKey (36 seconds, 267 milliseconds)
[info] BasicOperationsSuite:
[info] - shuffle non-zero block size (6 seconds, 786 milliseconds)
[info] - map (280 milliseconds)
[info] - flatMap (410 milliseconds)
[info] - filter (458 milliseconds)
[info] - glom (384 milliseconds)
[info] - mapPartitions (383 milliseconds)
[info] - repartition (more partitions) (799 milliseconds)
[info] - repartition (fewer partitions) (843 milliseconds)
[info] - groupByKey (416 milliseconds)
[info] - reduceByKey (583 milliseconds)
[info] - reduce (1 second, 760 milliseconds)
[info] - count (407 milliseconds)
[info] - shuffle serializer (6 seconds, 873 milliseconds)
[info] - countByValue (345 milliseconds)
[info] - mapValues (334 milliseconds)
[info] - flatMapValues (326 milliseconds)
[info] - union (261 milliseconds)
[info] - run Spark in yarn-cluster mode failure after sc initialized (40 seconds, 49 milliseconds)
[info] - union with input stream return None (256 milliseconds)
[info] - StreamingContext.union (419 milliseconds)
[info] - transform (552 milliseconds)
[info] - transform with NULL (124 milliseconds)
[info] - transform with input stream return None (254 milliseconds)
[info] - transformWith (564 milliseconds)
[info] - transformWith with input stream return None (375 milliseconds)
[info] - StreamingContext.transform (411 milliseconds)
[info] - StreamingContext.transform with input stream return None (320 milliseconds)
[info] - cogroup (557 milliseconds)
[info] - join (715 milliseconds)
[info] - SPARK-22284: Compute hash for nested structs (19 seconds, 205 milliseconds)
[info] - leftOuterJoin (495 milliseconds)
[info] - SPARK-30633: xxHash with different type seeds (404 milliseconds)
[info] - rightOuterJoin (1 second, 3 milliseconds)
[info] - fullOuterJoin (746 milliseconds)
[info] - updateStateByKey (512 milliseconds)
[info] - updateStateByKey - simple with initial value RDD (361 milliseconds)
[info] CastSuite:
[info] - updateStateByKey - testing time stamps as input (618 milliseconds)
[info] - updateStateByKey - with initial value RDD (637 milliseconds)
[info] - updateStateByKey - object lifecycle (588 milliseconds)
[info] - zero sized blocks (11 seconds, 657 milliseconds)
[info] - slice (2 seconds, 222 milliseconds)
[info] - slice - has not been initialized (109 milliseconds)
[info] - rdd cleanup - map and window (600 milliseconds)
[info] - rdd cleanup - updateStateByKey (3 seconds, 260 milliseconds)
Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - rdd cleanup - input blocks and persisted RDDs (2 seconds, 277 milliseconds)
[info] JavaStreamingListenerWrapperSuite:
[info] - basic (17 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.JavaWriteAheadLogSuite.testCustomWAL started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.011s
[info] Test run started
[info] Test org.apache.spark.streaming.JavaMapWithStateSuite.testBasicFunction started
[info] - null cast (10 seconds, 373 milliseconds)
[info] - cast string to date (331 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.856s
[info] Test run started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testGreaterEq started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testDiv started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMinus started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testTimes started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testLess started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testPlus started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testGreater started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMinutes started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMilliseconds started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testLessEq started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testSeconds started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.006s
[info] Test run started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testStreamingContextTransform started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFlatMapValues started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByWindowWithInverse started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testMapPartitions started
[info] - zero sized blocks without kryo (9 seconds, 619 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairFilter started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testRepartitionFewerPartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCombineByKey started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testContextGetOrCreate started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testWindowWithSlideDuration started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testQueueStream started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCountByValue started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairToNormalRDDTransform started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairReduceByKey started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCount started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCheckpointMasterRecovery started
[info] - shuffle on mutable pairs (5 seconds, 290 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testUnion started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFlatMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByKeyAndWindowWithInverse started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testGlom started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testJoin started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairFlatMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairToPairFlatMapWithChangingTypes started
[info] - sorting on mutable pairs (6 seconds, 62 milliseconds)
[info] - run Python application in yarn-client mode (32 seconds, 101 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairMapPartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testRepartitionMorePartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByWindowWithoutInverse started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testLeftOuterJoin started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testVariousTransform started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testTransformWith started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testVariousTransformWith started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testTextFileStream started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairGroupByKey started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCoGroup started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testInitialization started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testSocketString started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testGroupByKeyAndWindow started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByKeyAndWindow started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testForeachRDD started
[info] - cogroup using mutable pairs (6 seconds, 59 milliseconds)
[info] Test test.org.apache.spark.streaming.Jav