Console Output

Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
SPARK_BRANCH=branch-2.4
PATH=/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LANG=en_US.UTF-8
SPARK_TESTING=1
JAVA_HOME=/usr/java/latest
AMPLAB_JENKINS="true"

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on research-jenkins-worker-01 (ubuntu20 worker-01 ubuntu) in workspace /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6
The recommended git tool is: NONE
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/branch-2.4^{commit} # timeout=10
Checking out Revision 7733510d0403625c41710d7e79f810117aac2ced (origin/branch-2.4)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7733510d0403625c41710d7e79f810117aac2ced # timeout=10
Commit message: "[SPARK-35288][SQL] StaticInvoke should find the method without exact argument classes match"
 > git rev-list --no-walk eaf0553990be7261b669bd2fbb9b92873e257276 # timeout=10
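To reproduce this checkout locally, the following mirrors the git trace above (the URL and commit SHA are taken verbatim from the log; the initial clone step is an assumption, since Jenkins fetches into an existing workspace):

    git clone https://github.com/apache/spark.git && cd spark
    git fetch --tags --force origin '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 7733510d0403625c41710d7e79f810117aac2ced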
[spark-branch-2.4-test-sbt-hadoop-2.6] $ /bin/bash /tmp/jenkins5360370976254545717.sh
Removing R/lib/
Removing R/pkg/man/
Removing assembly/target/
Removing build/sbt-launch-0.13.17.jar
Removing build/scala-2.11.12/
Removing build/zinc-0.3.15/
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/derby.log
Removing core/dummy/
Removing core/ignored/
Removing core/metastore_db/
Removing core/target/
Removing derby.log
Removing dev/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/lint-r-report.log
Removing dev/pr-deps/
Removing dev/pycodestyle-2.4.0.py
Removing dev/sparktestsupport/__init__.pyc
Removing dev/sparktestsupport/__pycache__/
Removing dev/sparktestsupport/modules.pyc
Removing dev/sparktestsupport/shellutils.pyc
Removing dev/sparktestsupport/toposort.pyc
Removing dev/target/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/spark-warehouse/
Removing external/avro/target/
Removing external/flume-assembly/target/
Removing external/flume-sink/target/
Removing external/flume/checkpoint/
Removing external/flume/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/spark-warehouse/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10/target/
Removing external/kafka-0-8-assembly/target/
Removing external/kafka-0-8/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/checkpoint/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing launcher/target/
Removing lib/
Removing logs/
Removing metastore_db/
Removing mllib-local/target/
Removing mllib/checkpoint/
Removing mllib/spark-warehouse/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/__pycache__/
Removing python/docs/__pycache__/
Removing python/docs/_build/
Removing python/docs/epytext.pyc
Removing python/lib/pyspark.zip
Removing python/pyspark/__init__.pyc
Removing python/pyspark/__pycache__/
Removing python/pyspark/_globals.pyc
Removing python/pyspark/accumulators.pyc
Removing python/pyspark/broadcast.pyc
Removing python/pyspark/cloudpickle.pyc
Removing python/pyspark/conf.pyc
Removing python/pyspark/context.pyc
Removing python/pyspark/files.pyc
Removing python/pyspark/find_spark_home.pyc
Removing python/pyspark/heapq3.pyc
Removing python/pyspark/java_gateway.pyc
Removing python/pyspark/join.pyc
Removing python/pyspark/ml/__init__.pyc
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/base.pyc
Removing python/pyspark/ml/classification.pyc
Removing python/pyspark/ml/clustering.pyc
Removing python/pyspark/ml/common.pyc
Removing python/pyspark/ml/evaluation.pyc
Removing python/pyspark/ml/feature.pyc
Removing python/pyspark/ml/fpm.pyc
Removing python/pyspark/ml/image.pyc
Removing python/pyspark/ml/linalg/__init__.pyc
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__init__.pyc
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/param/shared.pyc
Removing python/pyspark/ml/pipeline.pyc
Removing python/pyspark/ml/recommendation.pyc
Removing python/pyspark/ml/regression.pyc
Removing python/pyspark/ml/stat.pyc
Removing python/pyspark/ml/tuning.pyc
Removing python/pyspark/ml/util.pyc
Removing python/pyspark/ml/wrapper.pyc
Removing python/pyspark/mllib/__init__.pyc
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/classification.pyc
Removing python/pyspark/mllib/clustering.pyc
Removing python/pyspark/mllib/common.pyc
Removing python/pyspark/mllib/evaluation.pyc
Removing python/pyspark/mllib/feature.pyc
Removing python/pyspark/mllib/fpm.pyc
Removing python/pyspark/mllib/linalg/__init__.pyc
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/linalg/distributed.pyc
Removing python/pyspark/mllib/random.pyc
Removing python/pyspark/mllib/recommendation.pyc
Removing python/pyspark/mllib/regression.pyc
Removing python/pyspark/mllib/stat/KernelDensity.pyc
Removing python/pyspark/mllib/stat/__init__.pyc
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/stat/_statistics.pyc
Removing python/pyspark/mllib/stat/distribution.pyc
Removing python/pyspark/mllib/stat/test.pyc
Removing python/pyspark/mllib/tree.pyc
Removing python/pyspark/mllib/util.pyc
Removing python/pyspark/profiler.pyc
Removing python/pyspark/rdd.pyc
Removing python/pyspark/rddsampler.pyc
Removing python/pyspark/resultiterable.pyc
Removing python/pyspark/serializers.pyc
Removing python/pyspark/shuffle.pyc
Removing python/pyspark/sql/__init__.pyc
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/catalog.pyc
Removing python/pyspark/sql/column.pyc
Removing python/pyspark/sql/conf.pyc
Removing python/pyspark/sql/context.pyc
Removing python/pyspark/sql/dataframe.pyc
Removing python/pyspark/sql/functions.pyc
Removing python/pyspark/sql/group.pyc
Removing python/pyspark/sql/readwriter.pyc
Removing python/pyspark/sql/session.pyc
Removing python/pyspark/sql/streaming.pyc
Removing python/pyspark/sql/types.pyc
Removing python/pyspark/sql/udf.pyc
Removing python/pyspark/sql/utils.pyc
Removing python/pyspark/sql/window.pyc
Removing python/pyspark/statcounter.pyc
Removing python/pyspark/status.pyc
Removing python/pyspark/storagelevel.pyc
Removing python/pyspark/streaming/__init__.pyc
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/context.pyc
Removing python/pyspark/streaming/dstream.pyc
Removing python/pyspark/streaming/flume.pyc
Removing python/pyspark/streaming/kafka.pyc
Removing python/pyspark/streaming/kinesis.pyc
Removing python/pyspark/streaming/listener.pyc
Removing python/pyspark/streaming/util.pyc
Removing python/pyspark/taskcontext.pyc
Removing python/pyspark/traceback_utils.pyc
Removing python/pyspark/util.pyc
Removing python/pyspark/version.pyc
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/spark-warehouse/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing spark-warehouse/
Removing sql/__pycache__/
Removing sql/catalyst/loc/
Removing sql/catalyst/target/
Removing sql/core/loc/
Removing sql/core/paris/
Removing sql/core/spark-warehouse/
Removing sql/core/target/
Removing sql/hive-thriftserver/derby.log
Removing sql/hive-thriftserver/metastore_db/
Removing sql/hive-thriftserver/spark-warehouse/
Removing sql/hive-thriftserver/target/
Removing sql/hive/derby.log
Removing sql/hive/loc/
Removing sql/hive/metastore_db/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/checkpoint/
Removing streaming/target/
Removing target/
Removing tools/target/
Removing work/
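The "Removing ..." lines above are characteristic of git clean, which prints one such line per untracked path it deletes. The Jenkins script at /tmp/jenkins5360370976254545717.sh is not shown, so the exact flags are an assumption; a minimal sketch of a cleanup step that produces this output:

    # Delete all untracked files and directories, including ignored
    # build output; prints "Removing <path>" for each entry removed.
    git clean -fdx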
+++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
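From the ++-prefixed xtrace lines above, find-r.sh resolves the R binary roughly as follows. This is a reconstruction implied by the trace, not the verbatim script:

    # If R_SCRIPT_PATH is not already set, locate R on the PATH and
    # remember the directory that contains the binary.
    if [ -z "$R_SCRIPT_PATH" ]; then
      if [ ! "$(command -v R)" ]; then
        echo "Could not find R on PATH" >&2   # error text is assumed
        exit 1
      fi
      R_SCRIPT_PATH="$(dirname "$(which R)")"
    fi
    echo "Using R_SCRIPT_PATH = $R_SCRIPT_PATH"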
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Loading required package: usethis
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/DESCRIPTION
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:592] @name May only use one @name per block
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:733] @name May only use one @name per block
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing FPGrowthModel-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib/sparkr.zip SparkR
+ popd
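Pieced together from the + trace lines, the overall R/install-dev.sh flow in this build is roughly the following (a reconstruction from the trace, not the verbatim script):

    FWDIR="$(cd "$(dirname "$0")" && pwd)"
    LIB_DIR="$FWDIR/lib"
    mkdir -p "$LIB_DIR"
    pushd "$FWDIR"
    . "$FWDIR/find-r.sh"     # resolve R_SCRIPT_PATH (see sketch above)
    . "$FWDIR/create-rd.sh"  # roxygen2 via devtools::document() -> pkg/man/*.Rd
    "$R_SCRIPT_PATH/R" CMD INSTALL --library="$LIB_DIR" "$FWDIR/pkg/"
    cd "$LIB_DIR"
    jar cfM "$LIB_DIR/sparkr.zip" SparkR   # zip the installed package
    popd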
[info] Using build tool sbt with Hadoop profile hadoop2.6 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests: 

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.

========================================================================
Running Scala style checks
========================================================================
Scalastyle checks passed.

========================================================================
Running Python style checks
========================================================================
pycodestyle checks passed.
rm -rf _build/*
pydoc checks passed.

========================================================================
Running R style checks
========================================================================

Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union


Attaching package: ‘testthat’

The following objects are masked from ‘package:SparkR’:

    describe, not

lintr checks passed.
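These style checks map onto scripts under dev/ in the Spark tree; a sketch of running them directly (script names are from the Spark repository; treat the exact set on this branch as an assumption):

    ./dev/lint-scala   # Scalastyle
    ./dev/lint-python  # pycodestyle + pydoc checks
    ./dev/lint-r       # lintr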

========================================================================
Running build tests
========================================================================
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn

========================================================================
Building Spark
========================================================================
[info] Building Spark (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos test:package streaming-kafka-0-8-assembly/assembly streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly
Using /usr/java/latest as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
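For reference, a direct invocation equivalent to the build described above; a sketch in which every profile and target is taken verbatim from the [info] line, driven through the sbt launcher bundled under build/:

    JAVA_HOME=/usr/java/latest build/sbt \
      -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl \
      -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos \
      test:package \
      streaming-kafka-0-8-assembly/assembly \
      streaming-flume-assembly/assembly \
      streaming-kinesis-asl-assembly/assembly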
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Avro compiler using stringType=CharSequence
[info] Compiling Avro IDL /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/src/main/avro/sparkflume.avdl
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}spark...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tags...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-library/2.11.12/scala-library-2.11.12.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-library;2.11.12!scala-library.jar (856ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalatest/scalatest_2.11/3.0.3/scalatest_2.11-3.0.3.jar ...
[info] 	[SUCCESSFUL ] org.scalatest#scalatest_2.11;3.0.3!scalatest_2.11.jar(bundle) (1024ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalactic/scalactic_2.11/3.0.3/scalactic_2.11-3.0.3.jar ...
[info] 	[SUCCESSFUL ] org.scalactic#scalactic_2.11;3.0.3!scalactic_2.11.jar(bundle) (495ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-reflect/2.11.12/scala-reflect-2.11.12.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] 	[SUCCESSFUL ] org.scala-lang#scala-reflect;2.11.12!scala-reflect.jar (787ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.5/scala-xml_2.11-1.0.5.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.5!scala-xml_2.11.jar(bundle) (392ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.4/scala-parser-combinators_2.11-1.0.4.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.4!scala-parser-combinators_2.11.jar(bundle) (496ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/typesafe/genjavadoc/genjavadoc-plugin_2.11.12/0.14/genjavadoc-plugin_2.11.12-0.14.jar ...
[info] 	[SUCCESSFUL ] com.typesafe.genjavadoc#genjavadoc-plugin_2.11.12;0.14!genjavadoc-plugin_2.11.12.jar (450ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-compiler/2.11.12/scala-compiler-2.11.12.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-compiler;2.11.12!scala-compiler.jar (932ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}launcher...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-common...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}unsafe...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sketch...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib-local...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kvstore...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-sink...
[info] Compiling 2 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/classes...
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/clapper/classutil_2.11/1.1.2/classutil_2.11-1.1.2.jar ...
[info] 	[SUCCESSFUL ] org.clapper#classutil_2.11;1.1.2!classutil_2.11.jar (501ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/ow2/asm/asm-util/5.1/asm-util-5.1.jar ...
[info] 	[SUCCESSFUL ] org.ow2.asm#asm-util;5.1!asm-util.jar (352ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/clapper/grizzled-scala_2.11/4.2.0/grizzled-scala_2.11-4.2.0.jar ...
[info] 	[SUCCESSFUL ] org.clapper#grizzled-scala_2.11;4.2.0!grizzled-scala_2.11.jar (477ms)
[info] Done updating.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-sdk/1.6.0/flume-ng-sdk-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-sdk;1.6.0!flume-ng-sdk.jar (426ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-core/1.6.0/flume-ng-core-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-core;1.6.0!flume-ng-core.jar (438ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro-compiler/1.7.3/avro-compiler-1.7.3.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[info] 	[SUCCESSFUL ] org.apache.avro#avro-compiler;1.7.3!avro-compiler.jar (388ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro-ipc/1.7.4/avro-ipc-1.7.4.jar ...
[info] 	[SUCCESSFUL ] org.apache.avro#avro-ipc;1.7.4!avro-ipc.jar (405ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/netty/netty/3.5.12.Final/netty-3.5.12.Final.jar ...
[info] 	[SUCCESSFUL ] io.netty#netty;3.5.12.Final!netty.jar(bundle) (483ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-configuration/1.6.0/flume-ng-configuration-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-configuration;1.6.0!flume-ng-configuration.jar (407ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/flume/flume-ng-auth/1.6.0/flume-ng-auth-1.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.flume#flume-ng-auth;1.6.0!flume-ng-auth.jar (454ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/commons-codec/commons-codec/1.8/commons-codec-1.8.jar ...
[info] 	[SUCCESSFUL ] commons-codec#commons-codec;1.8!commons-codec.jar (444ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/joda-time/joda-time/2.1/joda-time-2.1.jar ...
[info] 	[SUCCESSFUL ] joda-time#joda-time;2.1!joda-time.jar (471ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/mortbay/jetty/servlet-api/2.5-20110124/servlet-api-2.5-20110124.jar ...
[info] 	[SUCCESSFUL ] org.mortbay.jetty#servlet-api;2.5-20110124!servlet-api.jar (379ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/google/code/gson/gson/2.2.2/gson-2.2.2.jar ...
[info] 	[SUCCESSFUL ] com.google.code.gson#gson;2.2.2!gson.jar (443ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/mina/mina-core/2.0.4/mina-core-2.0.4.jar ...
[info] 	[SUCCESSFUL ] org.apache.mina#mina-core;2.0.4!mina-core.jar(bundle) (397ms)
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/core/jackson-databind/2.6.7.3/jackson-databind-2.6.7.3.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.core#jackson-databind;2.6.7.3!jackson-databind.jar(bundle) (528ms)
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/mockito/mockito-core/1.10.19/mockito-core-1.10.19.jar ...
[info] 	[SUCCESSFUL ] org.mockito#mockito-core;1.10.19!mockito-core.jar (276ms)
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalanlp/breeze_2.11/0.13.2/breeze_2.11-0.13.2.jar ...
[info] 	[SUCCESSFUL ] org.scalanlp#breeze_2.11;0.13.2!breeze_2.11.jar (1224ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalanlp/breeze-macros_2.11/0.13.2/breeze-macros_2.11-0.13.2.jar ...
[info] 	[SUCCESSFUL ] org.scalanlp#breeze-macros_2.11;0.13.2!breeze-macros_2.11.jar (337ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spire-math/spire_2.11/0.13.0/spire_2.11-0.13.0.jar ...
[info] 	[SUCCESSFUL ] org.spire-math#spire_2.11;0.13.0!spire_2.11.jar (608ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/chuusai/shapeless_2.11/2.3.2/shapeless_2.11-2.3.2.jar ...
[info] 	[SUCCESSFUL ] com.chuusai#shapeless_2.11;2.3.2!shapeless_2.11.jar(bundle) (566ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spire-math/spire-macros_2.11/0.13.0/spire-macros_2.11-0.13.0.jar ...
[info] 	[SUCCESSFUL ] org.spire-math#spire-macros_2.11;0.13.0!spire-macros_2.11.jar (313ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/typelevel/machinist_2.11/0.6.1/machinist_2.11-0.6.1.jar ...
[info] 	[SUCCESSFUL ] org.typelevel#machinist_2.11;0.6.1!machinist_2.11.jar (245ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/typelevel/macro-compat_2.11/1.1.1/macro-compat_2.11-1.1.1.jar ...
[info] 	[SUCCESSFUL ] org.typelevel#macro-compat_2.11;1.1.1!macro-compat_2.11.jar (253ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scalacheck/scalacheck_2.11/1.13.5/scalacheck_2.11-1.13.5.jar ...
[info] 	[SUCCESSFUL ] org.scalacheck#scalacheck_2.11;1.13.5!scalacheck_2.11.jar (546ms)
[info] Done updating.
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/twitter/chill_2.11/0.9.3/chill_2.11-0.9.3.jar ...
[info] 	[SUCCESSFUL ] com.twitter#chill_2.11;0.9.3!chill_2.11.jar (418ms)
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.7/slf4j-api-1.7.7.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.7!slf4j-api.jar (87ms)
[info] Done updating.
[info] Compiling 78 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/classes...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-shuffle...
[info] 'compiler-interface' not yet compiled for Scala 2.11.12. Compiling...
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}core...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-yarn...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 24 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info]   Compilation completed in 11.247 s
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-jackson_2.11/3.5.3/json4s-jackson_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-jackson_2.11;3.5.3!json4s-jackson_2.11.jar (392ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/module/jackson-module-scala_2.11/2.6.7.1/jackson-module-scala_2.11-2.6.7.1.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.module#jackson-module-scala_2.11;2.6.7.1!jackson-module-scala_2.11.jar(bundle) (498ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-core_2.11/3.5.3/json4s-core_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-core_2.11;3.5.3!json4s-core_2.11.jar (348ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-ast_2.11/3.5.3/json4s-ast_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-ast_2.11;3.5.3!json4s-ast_2.11.jar (304ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/json4s/json4s-scalap_2.11/3.5.3/json4s-scalap_2.11-3.5.3.jar ...
[info] 	[SUCCESSFUL ] org.json4s#json4s-scalap_2.11;3.5.3!json4s-scalap_2.11.jar (335ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.6/scala-xml_2.11-1.0.6.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.6!scala-xml_2.11.jar(bundle) (125ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/curator/curator-test/2.6.0/curator-test-2.6.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.curator#curator-test;2.6.0!curator-test.jar (396ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpclient/4.5.1/httpclient-4.5.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.5.1!httpclient.jar (412ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpcore/4.4.3/httpcore-4.4.3.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpcore;4.4.3!httpcore.jar (372ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/htmlunit/htmlunit/2.18/htmlunit-2.18.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.htmlunit#htmlunit;2.18!htmlunit.jar (575ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpmime/4.5/httpmime-4.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpmime;4.5!httpmime.jar (388ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/htmlunit/htmlunit-core-js/2.17/htmlunit-core-js-2.17.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.htmlunit#htmlunit-core-js;2.17!htmlunit-core-js.jar (501ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/xerces/xercesImpl/2.11.0/xercesImpl-2.11.0.jar ...
[info] 	[SUCCESSFUL ] xerces#xercesImpl;2.11.0!xercesImpl.jar (550ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/nekohtml/nekohtml/1.9.22/nekohtml-1.9.22.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.nekohtml#nekohtml;1.9.22!nekohtml.jar (450ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sourceforge/cssparser/cssparser/0.9.16/cssparser-0.9.16.jar ...
[info] 	[SUCCESSFUL ] net.sourceforge.cssparser#cssparser;0.9.16!cssparser.jar (443ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-client/9.2.12.v20150709/websocket-client-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-client;9.2.12.v20150709!websocket-client.jar (322ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-common/9.2.12.v20150709/websocket-common-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-common;9.2.12.v20150709!websocket-common.jar (425ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/eclipse/jetty/websocket/websocket-api/9.2.12.v20150709/websocket-api-9.2.12.v20150709.jar ...
[info] 	[SUCCESSFUL ] org.eclipse.jetty.websocket#websocket-api;9.2.12.v20150709!websocket-api.jar (460ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spark-project/hive/hive-exec/1.2.1.spark2/hive-exec-1.2.1.spark2.jar ...
[info] 	[SUCCESSFUL ] org.spark-project.hive#hive-exec;1.2.1.spark2!hive-exec.jar (978ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spark-project/hive/hive-metastore/1.2.1.spark2/hive-metastore-1.2.1.spark2.jar ...
[info] 	[SUCCESSFUL ] org.spark-project.hive#hive-metastore;1.2.1.spark2!hive-metastore.jar (703ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-avatica/1.2.0-incubating/calcite-avatica-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-avatica;1.2.0-incubating!calcite-avatica.jar (413ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/httpcomponents/httpclient/4.4.1/httpclient-4.4.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.httpcomponents#httpclient;4.4.1!httpclient.jar (469ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
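As the warning suggests, the io.netty conflict can be examined with sbt's evicted task; for example (the project id core is taken from this build's module list):

    build/sbt core/evicted   # show evicted dependency versions and why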
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}catalyst...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}graphx...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}ganglia-lgpl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mesos...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kubernetes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/test-classes...
[info] Compiling 6 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/classes...
[info] Compiling 12 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/classes...
[info] Compiling 16 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/classes...
[info] Compiling 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/classes...
[info] Compiling 20 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}yarn...
[info] Compiling 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Compiling 21 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/test-classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:22:  Unsafe is internal proprietary API and may be removed in a future release
[warn] import sun.misc.Unsafe;
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:28:  Unsafe is internal proprietary API and may be removed in a future release
[warn]   private static final Unsafe _UNSAFE;
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:150:  Unsafe is internal proprietary API and may be removed in a future release
[warn]     sun.misc.Unsafe unsafe;
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:152:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:154:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       unsafe = (sun.misc.Unsafe) unsafeField.get(null);
[warn]                         ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.9-SNAPSHOT.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.9-SNAPSHOT.jar ...
[info] Compiling 10 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/test-classes...
[info] Done packaging.
[info] Done packaging.
[info] Compiling 1 Scala source and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/test-classes...
[info] Compiling 7 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/test-classes...
[info] Compiling 495 Scala sources and 81 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:61:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers.next()).thenThrow(RuntimeException.class);
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:68:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers2.next()).thenReturn(mockManagedBuffer).thenThrow(RuntimeException.class);
[warn]                                                                          ^
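The two unchecked-varargs warnings from OneForOneStreamManagerSuite come from the thenThrow(Class...) overload, which forces the compiler to create a generic Class<? extends Throwable>[] array. A sketch of one common way to avoid it, passing a Throwable instance so the non-generic Throwable... overload is used (the mock here is illustrative, not the suite's actual setup):

    import org.mockito.Mockito

    // Stubbing with an exception *instance* avoids the unchecked
    // generic array creation that Class-based varargs stubbing triggers.
    val buffers = Mockito.mock(classOf[java.util.Iterator[AnyRef]])
    Mockito.when(buffers.next()).thenThrow(new RuntimeException())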
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Compiling 13 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/test-classes...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/src_managed/main/compiled_avro/org/apache/spark/streaming/flume/sink/EventBatch.java:243:  [unchecked] unchecked cast
[warn]         record.events = fieldSetFlags()[2] ? this.events : (java.util.List<org.apache.spark.streaming.flume.sink.SparkSinkEvent>) defaultValue(fields()[2]);
[warn]                                                                                                                                               ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
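sbt emits this eviction report whenever a module's resolution hits the same netty conflict; it is informational, and running `sbt evicted` prints the full details on demand. A hypothetical build.sbt fragment (not from Spark's actual build definition) that would make the selected version explicit:

    // Hypothetical pin: acknowledges sbt's choice of netty 3.9.9.Final in the
    // build definition so the eviction is deliberate (sbt 0.13 syntax).
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"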
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.1.0/scala-parser-combinators_2.11-1.1.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.1.0!scala-parser-combinators_2.11.jar(bundle) (448ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/antlr/antlr4/4.7/antlr4-4.7.jar ...
[info] 	[SUCCESSFUL ] org.antlr#antlr4;4.7!antlr4.jar (120ms)
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/dropwizard/metrics/metrics-ganglia/3.1.5/metrics-ganglia-3.1.5.jar ...
[info] 	[SUCCESSFUL ] io.dropwizard.metrics#metrics-ganglia;3.1.5!metrics-ganglia.jar(bundle) (402ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/info/ganglia/gmetric4j/gmetric4j/1.0.7/gmetric4j-1.0.7.jar ...
[info] 	[SUCCESSFUL ] info.ganglia.gmetric4j#gmetric4j;1.0.7!gmetric4j.jar (404ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/acplt/oncrpc/1.0.7/oncrpc-1.0.7.jar ...
[info] 	[SUCCESSFUL ] org.acplt#oncrpc;1.0.7!oncrpc.jar (475ms)
[info] Done updating.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]                             ^
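Each deprecation above names its replacement: attemptNumber for StageInfo.attemptId, and AccumulatorV2 for the old AccumulableParam trait. A minimal sketch of the AccumulatorV2 API the message points to (a long-sum accumulator; the class name is illustrative):

    import org.apache.spark.util.AccumulatorV2

    // Minimal long-sum accumulator against the AccumulatorV2 API that the
    // deprecation message recommends over AccumulableParam.
    class LongSum extends AccumulatorV2[Long, Long] {
      private var sum = 0L
      def isZero: Boolean = sum == 0L
      def copy(): LongSum = { val a = new LongSum; a.sum = sum; a }
      def reset(): Unit = sum = 0L
      def add(v: Long): Unit = sum += v
      def merge(other: AccumulatorV2[Long, Long]): Unit = sum += other.value
      def value: Long = sum
    }

An instance would be registered with the SparkContext before use, e.g. sc.register(new LongSum, "mySum").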
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-client/4.6.1/kubernetes-client-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-client;4.6.1!kubernetes-client.jar (465ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/dataformat/jackson-dataformat-yaml/2.6.7/jackson-dataformat-yaml-2.6.7.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-yaml;2.6.7!jackson-dataformat-yaml.jar(bundle) (443ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-model/4.6.1/kubernetes-model-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-model;4.6.1!kubernetes-model.jar(bundle) (856ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/squareup/okhttp3/okhttp/3.12.0/okhttp-3.12.0.jar ...
[info] 	[SUCCESSFUL ] com.squareup.okhttp3#okhttp;3.12.0!okhttp.jar (491ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/squareup/okhttp3/logging-interceptor/3.12.0/logging-interceptor-3.12.0.jar ...
[info] 	[SUCCESSFUL ] com.squareup.okhttp3#logging-interceptor;3.12.0!logging-interceptor.jar (425ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/jul-to-slf4j/1.7.26/jul-to-slf4j-1.7.26.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#jul-to-slf4j;1.7.26!jul-to-slf4j.jar (428ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/io/fabric8/kubernetes-model-common/4.6.1/kubernetes-model-common-4.6.1.jar ...
[info] 	[SUCCESSFUL ] io.fabric8#kubernetes-model-common;4.6.1!kubernetes-model-common.jar (404ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/module/jackson-module-jaxb-annotations/2.9.9/jackson-module-jaxb-annotations-2.9.9.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.module#jackson-module-jaxb-annotations;2.9.9!jackson-module-jaxb-annotations.jar(bundle) (380ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/yaml/snakeyaml/1.15/snakeyaml-1.15.jar ...
[info] 	[SUCCESSFUL ] org.yaml#snakeyaml;1.15!snakeyaml.jar(bundle) (387ms)
[info] Done updating.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-web-proxy/2.6.5/hadoop-yarn-server-web-proxy-2.6.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-web-proxy;2.6.5!hadoop-yarn-server-web-proxy.jar (406ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-tests/2.6.5/hadoop-yarn-server-tests-2.6.5-tests.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-tests;2.6.5!hadoop-yarn-server-tests.jar (477ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-resourcemanager/2.6.5/hadoop-yarn-server-resourcemanager-2.6.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-resourcemanager;2.6.5!hadoop-yarn-server-resourcemanager.jar (484ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/hadoop/hadoop-yarn-server-applicationhistoryservice/2.6.5/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar ...
[info] 	[SUCCESSFUL ] org.apache.hadoop#hadoop-yarn-server-applicationhistoryservice;2.6.5!hadoop-yarn-server-applicationhistoryservice.jar (400ms)
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.6.7/jackson-dataformat-cbor-2.6.7.jar ...
[info] 	[SUCCESSFUL ] com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.6.7!jackson-dataformat-cbor.jar(bundle) (244ms)
[info] Done updating.
[warn] 5 warnings found
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
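This sbt notice is harmless for a library jar; 'show discoveredMainClasses' lists the candidates. If a single entry point were wanted, a hypothetical build.sbt line (the class name here is made up, not one of Spark's discovered classes) would silence it:

    // Hypothetical: select one main class explicitly (sbt 0.13 syntax).
    mainClass in Compile := Some("org.example.Main")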
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-core/1.5.5/orc-core-1.5.5-nohive.jar ...
[info] Done packaging.
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/classes...
[info] Compiling 37 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/classes...
[info] Compiling 103 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/classes...
[info] Compiling 26 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/classes...
[info] Compiling 38 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/classes...
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/classes...
[info] 	[SUCCESSFUL ] org.apache.orc#orc-core;1.5.5!orc-core.jar (713ms)
[info] Compiling 240 Scala sources and 31 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/classes...
[info] Compiling 240 Scala sources and 26 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/test-classes...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-mapreduce/1.5.5/orc-mapreduce-1.5.5-nohive.jar ...
[info] 	[SUCCESSFUL ] org.apache.orc#orc-mapreduce;1.5.5!orc-mapreduce.jar (646ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/orc/orc-shims/1.5.5/orc-shims-1.5.5.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] 	[SUCCESSFUL ] org.apache.orc#orc-shims;1.5.5!orc-shims.jar (402ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/postgresql/postgresql/9.4.1207.jre7/postgresql-9.4.1207.jre7.jar ...
[info] 	[SUCCESSFUL ] org.postgresql#postgresql;9.4.1207.jre7!postgresql.jar(bundle) (501ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn]                                    ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 6 warnings found
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/classes...
[info] Compiling 11 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn]            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn]        ^
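Most of the kinesis-asl deprecations above converge on one replacement: KinesisUtils.createStream is superseded by KinesisInputDStream.builder, and the deprecated AmazonKinesisClient constructors by the AWS SDK's AmazonKinesisClientBuilder. A sketch of the builder path following the Spark 2.4 kinesis-asl documentation; the stream, endpoint, region, and app names are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) = {
      // Builder-based replacement for the deprecated KinesisUtils.createStream.
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("myStream")
        .endpointUrl("https://kinesis.us-east-1.amazonaws.com")
        .regionName("us-east-1")
        .initialPosition(new KinesisInitialPositions.Latest())
        .checkpointAppName("myApp")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()
    }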
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka_2.11/0.8.2.1/kafka_2.11-0.8.2.1.jar ...
[warn] four warnings found
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka_2.11;0.8.2.1!kafka_2.11.jar (706ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.2/scala-parser-combinators_2.11-1.0.2.jar ...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 17 warnings found
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2!scala-parser-combinators_2.11.jar(bundle) (418ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/101tec/zkclient/0.3/zkclient-0.3.jar ...
[info] 	[SUCCESSFUL ] com.101tec#zkclient;0.3!zkclient.jar (364ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka-clients/0.8.2.1/kafka-clients-0.8.2.1.jar ...
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka-clients;0.8.2.1!kafka-clients.jar (437ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/jpountz/lz4/lz4/1.2.0/lz4-1.2.0.jar ...
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] 	[SUCCESSFUL ] net.jpountz.lz4#lz4;1.2.0!lz4.jar (533ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar ...
[info] 	[SUCCESSFUL ] net.sf.jopt-simple#jopt-simple;3.2!jopt-simple.jar (87ms)
[info] Done updating.
[info] Compiling 11 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn]                                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn]                                            ^
[warn] 25 warnings found
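Every kafka-0-8 deprecation above carries the same advice: "Update to Kafka 0.10 integration". A sketch of that replacement API from spark-streaming-kafka-0-10 as documented for Spark 2.4; the broker address, group id, and topic are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kafka010._

    def directStream(ssc: StreamingContext): Unit = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "example-group",
        "auto.offset.reset" -> "latest"
      )
      // Direct stream via the 0.10 integration, replacing the 0.8 KafkaUtils.
      val stream = KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams)
      )
      stream.foreachRDD { rdd =>
        // kafka010 has its own HasOffsetRanges/OffsetRange, replacing the 0.8 ones.
        val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
        ranges.foreach(r => println(s"${r.topic} ${r.partition} ${r.fromOffset} ${r.untilOffset}"))
      }
    }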
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
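Every deprecation above points at the same migration path named in the message: move from the kafka-0-8 classes (KafkaUtils, KafkaCluster, OffsetRange, Broker) to the Kafka 0.10 integration. A minimal sketch of the replacement direct stream, assuming a StreamingContext named ssc and placeholder topic/broker values:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    // Placeholder consumer config; broker address and group id are illustrative only.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group")

    // kafka010 direct stream: replaces the kafka-0-8 createDirectStream overloads.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams))

Offset ranges remain available on the 0.10 side via rdd.asInstanceOf[HasOffsetRanges].offsetRanges, now from the kafka010 package.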
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25-tests.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.25!slf4j-api.jar(test-jar) (446ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/kafka/kafka_2.11/2.0.0/kafka_2.11-2.0.0.jar ...
[info] 	[SUCCESSFUL ] org.apache.kafka#kafka_2.11;2.0.0!kafka_2.11.jar (848ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/typesafe/scala-logging/scala-logging_2.11/3.9.0/scala-logging_2.11-3.9.0.jar ...
[info] 	[SUCCESSFUL ] com.typesafe.scala-logging#scala-logging_2.11;3.9.0!scala-logging_2.11.jar(bundle) (399ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/101tec/zkclient/0.10/zkclient-0.10.jar ...
[info] 	[SUCCESSFUL ] com.101tec#zkclient;0.10!zkclient.jar (445ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
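This eviction report recurs for nearly every module below: sbt selected netty 3.9.9.Final (spark-core's version) over the 3.6.x/3.7.x versions requested by zookeeper and hadoop-hdfs. When such a pick should be deliberate rather than left to conflict resolution, an explicit override in the build definition is the usual tool; a sketch, assuming it lives in the project's build.sbt:

    // Pin netty so the eviction is explicit instead of implied by
    // sbt's conflict resolution; version taken from the report above.
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"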
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl-assembly...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-assembly...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8-assembly...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql-kafka-0-10...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}avro...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10-assembly...
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn]                      ^
[warn] four warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
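The poll(0) warnings come from kafka-clients 2.0.0 (fetched earlier in this build), which deprecates poll(long) in favor of poll(java.time.Duration). A sketch of the replacement call, assuming a KafkaConsumer named consumer is in scope; note the Duration overload returns immediately rather than blocking for partition assignment the way poll(0) did:

    import java.time.Duration

    // poll(Duration) supersedes the deprecated poll(long timeout).
    val records = consumer.poll(Duration.ZERO)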
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator seconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator milliseconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                                                 ^
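The fix is the one the warning spells out: enable the postfixOps language feature where the suite writes spans postfix-style:

    // Top of ExternalAppendOnlyMapSuite.scala, alongside the other imports
    // (or pass -language:postfixOps to scalac).
    import scala.language.postfixOps

With the import in place, `5 seconds` and `200 milliseconds` compile without the feature warning.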
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/jpmml/pmml-model/1.2.15/pmml-model-1.2.15.jar ...
[info] 	[SUCCESSFUL ] org.jpmml#pmml-model;1.2.15!pmml-model.jar (482ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/jpmml/pmml-schema/1.2.15/pmml-schema-1.2.15.jar ...
[info] 	[SUCCESSFUL ] org.jpmml#pmml-schema;1.2.15!pmml-schema.jar (421ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:48: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]   implicit def setAccum[A]: AccumulableParam[mutable.Set[A], A] =
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:49: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     new AccumulableParam[mutable.Set[A], A] {
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:145: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val setAcc = sc.accumulableCollection(mutable.HashSet[Int]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:146: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val bufferAcc = sc.accumulableCollection(mutable.ArrayBuffer[Int]())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:147: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val mapAcc = sc.accumulableCollection(mutable.HashMap[Int, String]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1194: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(1), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1198: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1264: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1278: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType",
[warn]                                                   ^
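Each of these four warnings names its own fix: StageInfo.attemptId is superseded by attemptNumber. A one-token sketch against the first occurrence:

    // attemptNumber replaces the deprecated attemptId accessor.
    SparkListenerTaskEnd(stage1.stageId, stage1.attemptNumber, "taskType", Success, tasks(1), null)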
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:168: trait AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val param = new AccumulatorParam[MyData] {
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: object DoubleAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: object FloatAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]                   ^
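All of the accumulator deprecations in this test-compile pass share the replacement named in the message, AccumulatorV2. A sketch of the built-in v2 accumulators, assuming a SparkContext sc; collectionAccumulator is the closest built-in to accumulableCollection, though it accumulates a list rather than a set:

    // Built-in AccumulatorV2 instances replace sc.accumulator(...)
    // and the AccumulatorParam type classes.
    val longAcc = sc.longAccumulator("counter")    // was sc.accumulator(0L)
    val doubleAcc = sc.doubleAccumulator("total")  // was sc.accumulator(123.4)
    val listAcc = sc.collectionAccumulator[Int]("seen")

    sc.parallelize(1 to 10).foreach { i =>
      longAcc.add(1)
      listAcc.add(i)
    }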
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 347 Scala sources and 93 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/classes...
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-core/1.2.0-incubating/calcite-core-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-core;1.2.0-incubating!calcite-core.jar (605ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/calcite/calcite-linq4j/1.2.0-incubating/calcite-linq4j-1.2.0-incubating.jar ...
[info] 	[SUCCESSFUL ] org.apache.calcite#calcite-linq4j;1.2.0-incubating!calcite-linq4j.jar (438ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]                                       ^
[warn] 40 warnings found
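The ENABLE_JOB_SUMMARY deprecation tracks parquet-mr 1.10 replacing the boolean flag with a tri-state summary level. A sketch of the newer property, assuming parquet-hadoop's JOB_SUMMARY_LEVEL constant and the same Hadoop conf used at that call site:

    import org.apache.parquet.hadoop.ParquetOutputFormat

    // parquet.summary.metadata.level supersedes the boolean
    // parquet.enable.summary-metadata (accepted values: ALL, COMMON_ONLY, NONE).
    conf.set(ParquetOutputFormat.JOB_SUMMARY_LEVEL, "NONE")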
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}repl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive-thriftserver...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}examples...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/java/test/org/apache/spark/JavaAPISuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
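"Multiple main classes detected" is sbt reporting that it cannot choose a default for run and for packaged manifests. If a default matters, it can be pinned in the build definition; a sketch with a placeholder class name:

    // sbt 0.13 syntax; the class name here is illustrative only.
    mainClass in Compile := Some("org.example.Main")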
[info] Compiling 6 Scala sources and 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/test-classes...
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/test-classes...
[info] Compiling 28 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/test-classes...
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/test-classes...
[info] Compiling 3 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/test-classes...
[info] Compiling 40 Scala sources and 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/test-classes...
[info] Compiling 23 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/test-classes...
[info] Compiling 201 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala:117: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]       FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK,
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumeStreamSuite.scala:83: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val flumeStream = FlumeUtils.createStream(
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:103: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     var offsetRanges = Array[OffsetRange]()
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:107: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:163: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:194: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:209: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, String](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:251: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:340: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:414: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:494: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:565: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       kafkaStream: DStream[(K, V)]): Seq[(Time, Array[OffsetRange])] = {
[warn]                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:30: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private var kc: KafkaCluster = null
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:40: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     kc = new KafkaCluster(Map("metadata.broker.list" -> kafkaTestUtils.brokerAddress))
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:64: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges = Array(OffsetRange(topic, 0, 0, messages.size))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:80: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val emptyRdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:81: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       sc, kafkaParams, Array(OffsetRange(topic, 0, 0, 0)))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:86: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val badRanges = Array(OffsetRange(topic, 0, 0, messages.size + 1))
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:88: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:102: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:113: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val ranges = rdd.get.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private def getRdd(kc: KafkaCluster, topics: Set[String]) = {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:161: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               OffsetRange(tp.topic, tp.partition, fromOffset, until(tp).offset)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:165: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               tp -> Broker(lo.host, lo.port)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:168: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]           KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder, String](
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:130: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:253: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:309: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:473: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:60: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:88: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:100: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:178: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.createTopic(zkUtils, topic, partitions, 1, config)
[warn]                ^
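The ZkUtils/AdminUtils deprecations carry their replacement in the message: org.apache.kafka.clients.admin.AdminClient. A sketch of topic creation through it, with placeholder connection values standing in for the test harness's own:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder address

    val admin = AdminClient.create(props)
    // Replaces AdminUtils.createTopic(zkUtils, topic, partitions, 1, config).
    admin.createTopics(Collections.singleton(new NewTopic("example-topic", 1, 1.toShort))).all().get()
    admin.close()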
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocatorSuite.scala:168: non-variable type argument org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf in type org.apache.spark.deploy.k8s.KubernetesConf[org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf] is unchecked since it is eliminated by erasure
[warn]         if (!argument.isInstanceOf[KubernetesConf[KubernetesExecutorSpecificConf]]) {
[warn]                                   ^
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:113: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:116: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:121: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:124: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:138: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:151: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "role2"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:152: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "*"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:155: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "role2"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:156: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "*"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:417: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:420: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:271: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:272: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(1).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:273: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(2).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:299: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers2.get(0).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:325: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:329: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:334: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:339: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:380: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:397: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpusDev.getRole.equals("dev"))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:400: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("mem") && r.getScalar.getValue.equals(484.0) && r.getRole.equals("prod")
[warn]                                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:403: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("cpus") && r.getScalar.getValue.equals(1.0) && r.getRole.equals("prod")
[warn]                                                                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtilsSuite.scala:54: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] 29 warnings found
[warn] 7 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
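The Status.valueOf warnings above come from the protobuf-generated Mesos bindings, where the numeric lookup valueOf(int) is deprecated in favor of forNumber(int). A minimal sketch of the swap, assuming the bundled Mesos jar was generated by a protobuf release that emits forNumber:

    import org.apache.mesos.Protos.Status

    // Deprecated: numeric enum lookup via valueOf
    val status = Status.valueOf(1)
    // Preferred in protobuf 3.x-style generated code (assumption: forNumber is present here)
    val status2 = Status.forNumber(1)

The setRole/getRole deprecations have no drop-in one-liner: newer Mesos protos carry the role on reservation info rather than on the Resource itself, so the fix depends on the Mesos version in use.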
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] one warning found
[warn] 24 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spark-project/hive/hive-cli/1.2.1.spark2/hive-cli-1.2.1.spark2.jar ...
[info] 	[SUCCESSFUL ] org.spark-project.hive#hive-cli;1.2.1.spark2!hive-cli.jar (33ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spark-project/hive/hive-jdbc/1.2.1.spark2/hive-jdbc-1.2.1.spark2.jar ...
[info] 	[SUCCESSFUL ] org.spark-project.hive#hive-jdbc;1.2.1.spark2!hive-jdbc.jar (23ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/spark-project/hive/hive-beeline/1.2.1.spark2/hive-beeline-1.2.1.spark2.jar ...
[info] 	[SUCCESSFUL ] org.spark-project.hive#hive-beeline;1.2.1.spark2!hive-beeline.jar (33ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
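An eviction warning like this means sbt kept the highest version of each conflicting artifact. When that choice should be deliberate, the winners can be pinned in the build; a hedged build.sbt sketch (not Spark's actual build configuration):

    // Pin the versions sbt already selected, making the eviction explicit.
    dependencyOverrides += "com.google.code.findbugs" % "jsr305" % "3.0.2"
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"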
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn]                             ^
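ParquetFileReader.readFooter is deprecated in favor of opening the file and asking the reader for its footer. A sketch under the assumption of Parquet >= 1.9 (which provides HadoopInputFile), reusing the filePath and sharedConf values from the warning site:

    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // was: ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
    val reader = ParquetFileReader.open(HadoopInputFile.fromPath(filePath, sharedConf))
    val fileMetaData =
      try reader.getFooter.getFileMetaData
      finally reader.close()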
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]            ^
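The ProcessingTime deprecation names its own replacement, Trigger.ProcessingTime. A caller-side sketch, assuming a streaming DataFrame df is in scope:

    import org.apache.spark.sql.streaming.Trigger

    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))  // was: ProcessingTime("10 seconds")
      .start()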
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/com/github/scopt/scopt_2.11/3.7.0/scopt_2.11-3.7.0.jar ...
[info] 	[SUCCESSFUL ] com.github.scopt#scopt_2.11;3.7.0!scopt_2.11.jar (240ms)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Compiling 8 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisInputDStreamBuilderSuite.scala:163: method initialPositionInStream in class Builder is deprecated: use initialPosition(initialPosition: KinesisInitialPosition)
[warn]         .initialPositionInStream(InitialPositionInStream.AT_TIMESTAMP)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:103: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream1 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:106: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream2 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:113: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val inputStream = KinesisUtils.createStream(ssc, appName, "dummyStream",
[warn]                                    ^
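Both Kinesis warnings point at KinesisInputDStream.builder. A sketch mirroring the deprecated calls above, assuming an active StreamingContext ssc; endpoint and region are left to the builder's defaults here:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    val kinesisStream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("mySparkStream")
      .checkpointAppName("myAppName")
      // was: initialPositionInStream(InitialPositionInStream.AT_TIMESTAMP);
      // new java.util.Date() is a placeholder timestamp
      .initialPosition(new KinesisInitialPositions.AtTimestamp(new java.util.Date()))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()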
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.9-SNAPSHOT   (depends on 1.1.0)
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.9-SNAPSHOT (depends on 1.1.0)
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}assembly...
[warn] four warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisInputDStreamBuilderSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.9-SNAPSHOT   (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.9-SNAPSHOT (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] 6 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/classes...
[info] Compiling 20 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/classes...
[info] Compiling 29 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/classes...
[info] Compiling 304 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:483: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn]                    ^
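poll(Long) was deprecated in Kafka 2.0 in favor of poll(java.time.Duration). A sketch of the swap at the call sites above, assuming the Kafka client on the classpath is >= 2.0:

    import java.time.Duration

    // was: consumer.poll(pollTimeoutMs)
    val p = consumer.poll(Duration.ofMillis(pollTimeoutMs))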
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55:  [unchecked] unchecked call to SparkAvroKeyRecordWriter(Schema,GenericData,CodecFactory,OutputStream,int,Map<String,String>) as a member of the raw type SparkAvroKeyRecordWriter
[warn]       return new SparkAvroKeyRecordWriter(
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55:  [unchecked] unchecked conversion
[warn]       return new SparkAvroKeyRecordWriter(
[warn]              ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 6 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:483: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] there were 16 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 12 Scala sources and 171 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:266:  [unchecked] unchecked call to read(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
[warn]                                                    ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:270:  [unchecked] unchecked call to write(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
[warn]                                                     ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:313:  [unchecked] getScheme() in TArrayTypeEntryStandardSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryStandardScheme getScheme() {
[warn]                                          ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryStandardScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:361:  [unchecked] getScheme() in TArrayTypeEntryTupleSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryTupleScheme getScheme() {
[warn]                                       ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryTupleScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TBinaryColumn.java:240:  [unchecked] unchecked cast
[warn]         setValues((List<ByteBuffer>)value);
[warn]                                     ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 292 Scala sources and 33 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.9-SNAPSHOT-tests.jar ...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn]                                    ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
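The OneHotEncoder warning spells out the migration: OneHotEncoderEstimator is the 2.x replacement (renamed back to OneHotEncoder in 3.0). A usage sketch, assuming a DataFrame df with an indexed categorical column:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(df).transform(df)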
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/classes...
[info] Compiling 191 Scala sources and 128 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn]                                   ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:393: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]             .isInstanceOf[(String, Timestamp)])
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:392: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]           assert(r.get().get(0, TextSocketReader.SCHEMA_TIMESTAMP)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala:204: postfix operator minute should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]         eventually(timeout(1 minute)) {
[warn]                              ^
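The postfixOps warning lists its own remedies; the dot form sidesteps the language import entirely. A sketch, assuming ScalaTest's SpanSugar is what supplies minute in this suite:

    import org.scalatest.concurrent.Eventually._
    import org.scalatest.time.SpanSugar._

    eventually(timeout(1.minute)) {  // was: timeout(1 minute), which needs postfixOps
      // assertions unchanged
    }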
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:693: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]       q1
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:230: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode("words", "word") { word: String => word.split(" ").toSeq }.select('word),
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:238: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('letters) {
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:288: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode($"*") { case Row(prefix: String, csv: String) =>
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:295: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('prefix, 'csv) { case Row(prefix: String, csv: String) =>
[warn]          ^
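Dataset.explode's warning names the replacement: functions.explode inside a select. A sketch for the first call site, assuming df has a string column "words":

    import org.apache.spark.sql.functions.{col, explode, split}

    // was: df.explode("words", "word") { word: String => word.split(" ").toSeq }
    val words = df.select(explode(split(col("words"), " ")).as("word"))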
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:228: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(lit(0), lit(2))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]                     ^
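The rangeBetween warnings resolve the same way: the Long-typed overload plus Window.currentRow. The fractional (2.5D) and calendar-interval frames above have no direct Long-typed counterpart, so only the integral case is sketched:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.col

    // was: .rangeBetween(lit(0), lit(2)) with functions.currentRow
    val window = Window.partitionBy(col("value")).orderBy(col("key"))
      .rangeBetween(Window.currentRow, 2L)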
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/ProcessingTimeSuite.scala:30: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn]     def getIntervalMs(trigger: Trigger): Long = trigger.asInstanceOf[ProcessingTime].intervalMs
[warn]                                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetCompatibilityTest.scala:49: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readAllFootersInParallel(hadoopConf, parquetFiles, true)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetInteroperabilitySuite.scala:178: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]                   ParquetFileReader.readFooter(hadoopConf, part.getPath, NO_FILTER)
[warn]                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:133: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:148: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:154: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readAllFootersInParallel(configuration, fs.getFileStatus(path)).asScala.toSeq
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:158: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readFooter(
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:51: class ConcurrentHashSet in package util is deprecated: see corresponding Javadoc for more information.
[warn]     val triggerTimes = new ConcurrentHashSet[Int]
[warn]                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:55: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]     val executor = ProcessingTimeExecutor(ProcessingTime("1000 milliseconds"), clock)
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:316: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock),
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:357: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock(60 * 1000)),
[warn]                   ^
[warn] 24 warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 14 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/test-classes...
[info] Compiling 88 Scala sources and 17 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/test-classes...
[info] Compiling 9 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/test-classes...
[info] Compiling 193 Scala sources and 66 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.9-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala:31: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]     try f finally SessionState.detachSession()
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaContinuousTest.scala:76: reflective access of structural type member value activeTaskIdCount should be enabled
[warn] by making the implicit value scala.language.reflectiveCalls visible.
[warn] This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
[warn] or by setting the compiler option -language:reflectiveCalls.
[warn] See the Scaladoc for value scala.language.reflectiveCalls for a discussion
[warn] why the feature should be explicitly enabled.
[warn]       assert(tasksEndedListener.activeTaskIdCount.get() == 0)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:141: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         Seq(new Field("null", Schema.create(Type.NULL), "doc", null)).asJava
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:164: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:192: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:224: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:250: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val fields = Seq(new Field("field1", UnionOfOne, "doc", null)).asJava
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:303: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field1", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:304: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field2", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:305: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field3", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:306: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field4", complexUnionType, "doc", null)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:978: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val avroField = new Field(name, avroType, "", null)
[warn]                       ^
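The deprecated Field constructor is the JsonNode-default overload; a bare null resolves to it because JsonNode is the more specific parameter type. Ascribing the null forces the Object-default overload instead (an assumption: Avro 1.8's Field(String, Schema, String, Object) variant is available), reusing the union schema from the test:

    import org.apache.avro.Schema.Field

    // was: new Field("field1", union, "doc", null), which picks the JsonNode overload
    val field = new Field("field1", union, "doc", null: Object)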
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] 10 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:66: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:95: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:107: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:198: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]         AdminUtils.createTopic(zkUtils, topic, partitions, 1)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:225: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.deleteTopic(zkUtils, topic)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:290: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:304: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:383: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       !zkUtils.pathExists(getDeleteTopicPath(topic)),
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:384: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       s"${getDeleteTopicPath(topic)} still exists")
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:409: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       zkUtils: ZkUtils,
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:421: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]           AdminUtils.deleteTopic(zkUtils, topic)
[warn]                      ^
[warn] 14 warnings found
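The ZkUtils/AdminUtils warnings all point at org.apache.kafka.clients.admin.AdminClient. A sketch of topic create/delete through it, assuming a reachable broker and the topic and partitions values from the test helper:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    val admin = AdminClient.create(props)
    try {
      admin.createTopics(Collections.singleton(new NewTopic(topic, partitions, 1.toShort))).all().get()
      admin.deleteTopics(Collections.singleton(topic)).all().get()
    } finally admin.close()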
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] there were 25 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:464:  [unchecked] unchecked cast
[warn]         setLint((List<Integer>)value);
[warn]                                ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:120: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) < 0.1)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:135: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) == summary.trainingCost)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:206: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]       model.computeCost(dataset)
[warn]             ^
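
As the computeCost messages suggest, ClusteringEvaluator is the supported way to score a clustering in 2.4; a minimal sketch, assuming model and dataset as in KMeansSuite:

    import org.apache.spark.ml.evaluation.ClusteringEvaluator

    // Evaluate on the model's predictions instead of calling model.computeCost(dataset):
    val predictions = model.transform(dataset)
    val silhouette = new ClusteringEvaluator().evaluate(predictions)
    // The cost on the training data itself remains available as model.summary.trainingCost.
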
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:46: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     ParamsSuite.checkParams(new OneHotEncoder)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:51: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:74: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:96: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:110: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:121: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val t = new OneHotEncoder()
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:156: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]       val encoder = new OneHotEncoder()
[warn]                         ^
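
On branch-2.4 the non-deprecated form is OneHotEncoderEstimator (renamed back to OneHotEncoder in 3.0); a minimal sketch, assuming a DataFrame df with an indexed column (the column names are illustrative):

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    // Estimator-style encoder: fit produces a model, which is applied with transform.
    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(df).transform(df)
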
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:52: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     var df = readImages(imagePath)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:55: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, false, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:58: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:62: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 0.5, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:69: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:74: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/DP153539.jpg", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:79: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/multi-channel/BGRA.png", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:84: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:90: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, false, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:96: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, 1.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:103: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, -0.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:109: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 0.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:114: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, sparkSession = spark, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:119: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:124: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, -3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:129: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 0, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:136: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val images = readImages(imagePath + "/multi-channel/").collect
[warn]                  ^
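
Each readImages warning points at the image data source that replaces ImageSchema in 3.0; a minimal sketch, assuming an active SparkSession named spark and the suite's imagePath:

    // Data-source replacement for ImageSchema.readImages(imagePath):
    val images = spark.read.format("image").load(imagePath)
    images.select("image.origin", "image.width", "image.height").show()
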
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:227: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:303: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:338: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:919: object LogisticRegressionWithSGD in package classification is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val model = LogisticRegressionWithSGD.train(points, 2)
[warn]                 ^
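
For the RDD-based API, LogisticRegressionWithLBFGS is the drop-in replacement named by the deprecation; a minimal sketch, assuming points: RDD[LabeledPoint] as in the suite:

    import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS

    // LBFGS-based trainer in place of the deprecated SGD constructor:
    val lr = new LogisticRegressionWithLBFGS().setIntercept(true)
    val model = lr.run(points)
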
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala:369: method train in object KMeans is deprecated: Use train method without 'runs'
[warn]       val model = KMeans.train(points, 2, 2, 1, initMode)
[warn]                          ^
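
The 'runs' argument has been a no-op for several releases, so the overload without it is equivalent; a sketch, assuming points and initMode as in the suite:

    import org.apache.spark.mllib.clustering.KMeans

    // Same call minus the ignored runs = 1 argument (k = 2, maxIterations = 2):
    val model = KMeans.train(points, 2, 2, initMode)
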
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:80: value precision in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.precision) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:81: value recall in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.recall) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:82: value fMeasure in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.fMeasure) < delta)
[warn]                                                ^
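
These three deprecations exist because the micro-averaged precision, recall, and F-measure of a multiclass classifier all reduce to accuracy, so a single accessor covers them (metrics as constructed in the suite):

    // One value replaces the deprecated trio:
    val acc = metrics.accuracy  // == micro precision == micro recall == micro F-measure
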
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:58: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:102: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:156: object LassoWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = LassoWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:49: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(true)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:75: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:106: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:163: object LinearRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val model = LinearRegressionWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:63: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linearReg = new LinearRegressionWithSGD()
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:71: constructor RidgeRegressionWithSGD in class RidgeRegressionWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val ridgeReg = new RidgeRegressionWithSGD()
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:113: object RidgeRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = RidgeRegressionWithSGD.train(points, 2)
[warn]                 ^
[warn] 45 warnings found
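
All three SGD regressors above map onto ml.regression.LinearRegression through elasticNetParam (1.0 for lasso, 0.0 for ridge), with the regParam default change the messages call out. A minimal sketch, assuming a DataFrame trainDf with features/label columns (trainDf is hypothetical):

    import org.apache.spark.ml.regression.LinearRegression

    // Lasso replacement (L1); use setElasticNetParam(0.0) for the ridge case.
    val lasso = new LinearRegression()
      .setElasticNetParam(1.0)
      .setRegParam(0.01)  // the SGD classes defaulted to 0.01; LinearRegression defaults to 0.0
    val model = lasso.fit(trainDf)
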
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.9-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] Total time: 701 s, completed May 7, 2021 9:35:37 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
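
The AccumulableParam warnings come from the compatibility shim in AccumulatorV2.scala itself; user code migrates to the AccumulatorV2 API, for example the built-in long accumulator. A minimal sketch, assuming sc: SparkContext:

    // Named accumulator registered through SparkContext (AccumulatorV2-based):
    val counter = sc.longAccumulator("events")
    sc.parallelize(1 to 100).foreach(_ => counter.add(1))
    assert(counter.sum == 100L)
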
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
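
Every warning in the kafka-0-8 module carries the same advice: move to the Kafka 0.10 integration. A minimal sketch of the 0.10-style direct stream from spark-streaming-kafka-0-10, assuming ssc: StreamingContext and a kafkaParams map (both hypothetical here):

    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    // Direct stream against the new consumer API:
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("topic"), kafkaParams))
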
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 95 files
[info] SHA-1: 7287df7bcbeb852101f9027168ea72990fc5b17e
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 22 s, completed May 7, 2021 9:35:59 AM
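
The 'discard'/'filterDistinctLines'/'first' lines above are sbt-assembly merge strategies resolving duplicate entries across the jars being merged; they are declared with a pattern match of roughly this shape (an illustrative sketch, not Spark's actual build definition):

    // build.sbt fragment for sbt-assembly:
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", "services", xs @ _*) => MergeStrategy.filterDistinctLines
      case PathList("META-INF", xs @ _*)             => MergeStrategy.discard
      case _                                         => MergeStrategy.first
    }
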
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 88 files
[info] SHA-1: 6bd8250613eb0a4ee6753818a1ce9608a846fe9b
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 21 s, completed May 7, 2021 9:36:20 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
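
The replacement the Kinesis warnings name is the builder API; a minimal sketch, assuming ssc plus the app/stream/endpoint/region names used in the examples above:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    // Builder-based source replacing KinesisUtils.createStream (Spark 2.4 API):
    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName(streamName)
      .endpointUrl(endpointUrl)
      .regionName(regionName)
      .initialPosition(new KinesisInitialPositions.Latest())
      .checkpointAppName(kinesisAppName)
      .checkpointInterval(Seconds(10))  // interval is an illustrative choice
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()
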
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to 2 files
[warn] Strategy 'filterDistinctLines' was applied to 8 files
[warn] Strategy 'first' was applied to 50 files
[info] SHA-1: 6af6c7b29cebfe8541f67853ea8a945c596d4dd2
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 24 s, completed May 7, 2021 9:36:44 AM

========================================================================
Detecting binary incompatibilities with MiMa
========================================================================
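
MiMa works by diffing the freshly built classfiles against a previously released artifact, so the instrumentation warnings below concern classes the tool cannot fully model (inner functions, synthetic members) rather than actual incompatibilities. In sbt the comparison baseline is configured roughly like this (a sketch of the sbt-mima-plugin setting, not Spark's actual MimaBuild; the version shown is illustrative):

    // build.sbt fragment:
    mimaPreviousArtifacts := Set("org.apache.spark" %% "spark-core" % "2.4.0")
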
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Strategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.MainClassOptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
Error instrumenting class:org.apache.spark.mapred.SparkHadoopMapRedUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Hello
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoHelperChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SignalUtils.ActionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuerySpecificationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DateAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ImplicitTypeCasts
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.ZooKeeperLeaderElectionAgent.LeadershipStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnsetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AggregationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.Listener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestWorkerState
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.Accessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeSorterSpillMerger.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColumnReferenceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
Error instrumenting class:org.apache.spark.input.StreamInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.CaseWhenCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.BlockFetchStarter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator16$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.$$typecreator1$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractWindowExpressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.PrefixCache
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.CubeType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGroupingAnalytics
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Error instrumenting class:org.apache.spark.deploy.SparkSubmit$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableObjectArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.Division
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.TrackerState
21/05/07 09:37:34 WARN Utils: Your hostname, research-jenkins-worker-01 resolves to a loopback address: 127.0.1.1; using 192.168.10.21 instead (on interface igb0)
21/05/07 09:37:34 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ParenthesizedExpressionContext
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableProviderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveBroadcastHints
Error instrumenting class:org.apache.spark.sql.execution.command.DDLUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBytesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.expressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFunctionContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.CommitLog
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LocationSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.LBFGS.CostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyKeyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.COMMITTED
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.ClientPool
Error instrumenting class:org.apache.spark.scheduler.SplitInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkBuildInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SmallIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseMatrixPickler
Error instrumenting class:org.apache.spark.api.python.DoubleArrayWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelReader
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesStore
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.AnalysisErrorAt
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Once
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RepairTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowConstructorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery.PartitionedRelation
Error instrumenting class:org.apache.spark.deploy.SparkHadoopUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.AssociationRules.Rule
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedGroupConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummaryAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.CheckpointWriter.CheckpointWriteHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowDefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.EndpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNewInstance
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DateConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Aggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.IteratorForPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetDatabasePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetQuantifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CtesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans.DslLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Evolving
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.ReceiverTrackerEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.IdentityProjection
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.$SortedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.OffsetSeqLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WidenSetOperationTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.DateTimeOperations
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.output
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.$ClientCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorResponse
Error instrumenting class:org.apache.spark.launcher.InProcessLauncher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Count
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.FloatConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.BinaryPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.InputFileBlockHolder.FileBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FailNativeCommandContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableSchemaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Timer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Cholesky
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis
Error instrumenting class:org.apache.spark.deploy.master.ui.MasterWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableAttemptInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.RollupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableValuedFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Min
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStoreProvider$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.EmptyDirectoryWriteTask
21/05/07 09:37:36 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
21/05/07 09:37:36 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
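The two BLAS warnings mean netlib-java found neither a system-native nor a reference native BLAS, so MLlib's linear algebra falls back to the pure-JVM F2J implementation; this costs speed, not correctness. A small diagnostic sketch against the netlib-java API named in the warning:

    import com.github.fommil.netlib.BLAS

    // Prints the implementation actually loaded, e.g.
    // com.github.fommil.netlib.F2jBLAS when no native library is present.
    println(BLAS.getInstance().getClass.getName)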
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.linalg.distributed.RowMatrix.$SVDMode$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingDeduplicationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableLongArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumericLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionValContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.TextBasedFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowDatabasesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.TransportFrameDecoder.Interceptor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.FixedLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Bucketizer.BucketizerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.LabeledPointPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.LevelDBLogger
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerSlave
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter.ElementConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.errors.TreeNodeException
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreType
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.State
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.CTESubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TypeConstructorContext
Error instrumenting class:org.apache.spark.SSLOptions
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Append
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.ArrayDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableAliasContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ImplicitOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelReader
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator2$1
Error instrumenting class:org.apache.spark.input.WholeTextFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveMissingReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnquotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.PrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNaturalAndUsingJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
Error instrumenting class:org.apache.spark.deploy.history.HistoryServer
Error instrumenting class:org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
Error instrumenting class:org.apache.spark.api.python.TestOutputKeyConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.NNLS.Workspace
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.api.python.TestWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.RawStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SendHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
Error instrumenting class:org.apache.spark.deploy.FaultToleranceTest$delayedInit$body
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMax
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.LeftSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.Optimizer.OptimizeSubqueries
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.StatFunctions.CovarianceCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Unstable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RecoverPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.Hasher
Error instrumenting class:org.apache.spark.input.StreamBasedRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskReaper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.STATE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LocalIndexEncoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.SharedReadWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Replaced
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.StringType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterInStandby
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.DatabaseDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.ChainedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator18$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DereferenceContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.MultiLineCSVDataSource$
Error instrumenting class:org.apache.spark.deploy.security.HBaseDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntegerLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.LookupFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BooleanAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Complete
Error instrumenting class:org.apache.spark.input.StreamFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnalyzeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.TimestampConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowFunctionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DoubleAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StructContext
[WARN] Unable to detect inner functions for class:org.apache.spark.MapOutputTrackerMaster.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateDatabaseContext
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicatedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.OneWayRpcCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteDirContext
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase$NullIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SortItemContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubquery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Numeric$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.ReceiverInputDStream.ReceiverRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Key
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.util.BytecodeUtils.MethodInvocationFinder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator30$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.BooleanEquality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ByteConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.$DigestCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.IntHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.ABORTED
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ByteType.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitioningUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.trees.TreeNodeRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.MutableProjection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.ByteArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.SummarizerBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UncacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslSymbol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Max
21/05/07 09:37:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
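This NativeCodeLoader warning is the Hadoop-side analogue of the BLAS fallback above: libhadoop was not found on java.library.path, so Hadoop uses its builtin Java code paths (compression codecs being the common case). A quick check, assuming the Hadoop client jars already on this build's classpath:

    import org.apache.hadoop.util.NativeCodeLoader

    // true only if the libhadoop native library was successfully loaded
    println(NativeCodeLoader.isNativeCodeLoaded())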
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingJoinStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ShuffleMetricsSource
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.InMemoryScans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.FixNullability
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierAlternativeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.FunctionArgumentConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StructAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Aggregation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus
Error instrumenting class:org.apache.spark.ml.source.libsvm.LibSVMFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.AttributeSeq
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.UPDATING
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggAliasInGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.QuantileDiscretizer.QuantileDiscretizerWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes
Error instrumenting class:org.apache.spark.deploy.rest.RestSubmissionServer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDeBase.BasePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.SpecialLimits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2n
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.PartitionOverwriteMode
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowVectorAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedWindowContext
Error instrumenting class:org.apache.spark.sql.execution.command.CommandUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Update
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.SpillableArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter.$KeyValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.StringArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StorageHandlerContext
Error instrumenting class:org.apache.spark.input.Configurable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DataType.JSortedObject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsFractional
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Expression
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$ChunkCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FrameBoundContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StatefulAggregationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableIdentifierContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.RatingPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetOperationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.MessageDecoder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStore.MaintenanceTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeColNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinTypeContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildSide
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildRight
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExplainContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.FlattenStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
Error instrumenting class:org.apache.spark.ui.JettyUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.$KVSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOrdinalInOrderByAndGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Binary$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineWriter
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.StopAppClient
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.LongAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.QueryExecution.debug
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GroupingSetContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ShortAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryNoWithContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionPath$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.$Index
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslExpression
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrayAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Subscript
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyRDDWithPartitions
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
Error instrumenting class:org.apache.spark.sql.catalyst.parser.ParserUtils$EnhancedLogicalPlan$
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.ShuffleSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.HashComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
Error instrumenting class:org.apache.spark.deploy.worker.ui.WorkerWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.GeneratorState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseMatrixPickler
Error instrumenting class:org.apache.spark.metrics.MetricsSystem
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.xml.UDFXPathUtil.ReusableStringReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StringLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.ImplicitAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Utils.Lock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$RegisterDriverCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.TASK_END_REASON_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.FlatMapGroupsWithStateStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CompleteRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.ShippableVertexPartition.ShippableVertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator22$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.LimitMarker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByPercentileContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.JsonFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubqueryColumnAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BinaryType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.StoreVersion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockSort
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDA.LDAReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Log
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.RemoveAllHints
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSinkLog
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalNotContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.ClassInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.RandomVertexCut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.IntWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.OneForOneStreamManager.StreamState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupByType
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.LongWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingRelationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.ParserUtils.EnhancedLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierListContext
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueStore
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableIntArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SortPrefixUtils.NoOpPrefixComparator
Error instrumenting class:org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CheckForWorkerTimeOut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$MethodAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetLongDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.OutputCommitCoordinatorEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationListener
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
Error instrumenting class:org.apache.spark.metrics.sink.MetricsServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.SPARK_LISTENER_EVENT_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Message
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NumNonZeros
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.NullIntolerant
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.$ManualCloseBufferedOutputStream$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.IntAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.IntegerType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.QuasiNewton
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StringAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStore$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleMethodContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BinaryAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecLocationContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamMetadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.RightSide
Error instrumenting class:org.apache.spark.ui.ServerInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.Strings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.WindowsSubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.DiskMapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.GradientBoostedTreesModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.sql.execution.datasources.NoopCache$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.CholeskySolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.JoinSelection
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BucketSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DoubleLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.SerializationDebugger
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionCallContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugQuery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$Location
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.IntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableByteArray
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.FloatAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LateralViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryView
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedPrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ShortType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ClearCacheContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.StructTypePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoder.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.1
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter$
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition1D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.HistoryServerDiskManager.Lease
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Deprecated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRelations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition2D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TinyIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.$SortState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ColumnarBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetBinaryDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.BooleanConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.CountMinSketch.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
Error instrumenting class:org.apache.spark.sql.catalyst.util.CompressionCodecs$
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskRunner
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.RLEIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableHeaderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.io.ReadAheadInputStream.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.DoublePrefixComparator
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LoadDataContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.QuotedStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Auto
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeNNZ
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DateType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhenClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ChangeColumnContext
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.$OptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator25$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ManageResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationManagerSource
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.Metadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalFieldContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryBodyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatSerdeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$SetAccumulator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinCriteriaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator19$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.1
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSourceLog
[WARN] Unable to detect inner functions for class:org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.LongType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.AutoBatchedPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.ui.ServerInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSink$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMean
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.JOB_RESULT_FORMATTED_CLASS_NAMES
Error instrumenting class:org.apache.spark.ui.WebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.KVStoreScalaSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimaryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticUnaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckSuccess
Error instrumenting class:org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexer.CategoryStats
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.ValuesReaderIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBucketContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SearchedCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.RevokedLeadership
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColPositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.lib.SVDPlusPlus.Conf
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.$RetryingBlockFetchListener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByRowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.$CloseAndFlushShieldOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.CatalystDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NonReservedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.ElectedLeader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExistsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.CommandBuilderUtils.JavaVendor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator2$1
Error instrumenting class:org.apache.spark.executor.ExecutorSource
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$
[WARN] Unable to detect inner functions for class:org.apache.spark.TestUtils.JavaSourceFromString
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NullLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TruncateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.DoubleAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StarContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.LongAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Nominal$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.ReaderIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.Builder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.RowPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayes.$$typecreator9$1
Error instrumenting class:org.apache.spark.input.StreamRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.RowComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Decoder
Error instrumenting class:org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.BasicOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.PushDownOperatorsToDataSource.FilterAndProject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.AbstractLauncher.ArgumentValidator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SimpleCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowFrame
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NestedConstantListContext
Error instrumenting class:org.apache.spark.api.python.JavaToWritableConverter
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionDirectory$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryOrganizationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkDirCleanup
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.$SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator.$StreamBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.WriterThread
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$Heartbeater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ManagedBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NNLSSolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.NettyUtils.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableLikeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ByteArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.TableDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.FileInputDStream.FileInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SkewSpecContext
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanDefaultContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Projection
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.IntAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherBackend.BackendConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetIntDictionaryAwareDecimalConverter
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$EdgeData
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BooleanType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.JdbcRDD.ConnectionFactory
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.KVComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Stable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowOrder
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Solver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTableColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildLeft
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Wildcard
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionBase.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WindowFrameCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex$
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutput
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$ServerConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Node
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslString
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore
Error instrumenting class:org.apache.spark.api.python.TestOutputValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.RadixSortSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator.AliasedGenerator$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IndexToString.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGenerate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec.StateStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.ExternalAppendOnlyUnsafeRowArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.StringToAttributeConversionHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.PullOutNondeterministic
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FirstContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableLocationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ByteAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetStorageStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStoreOps
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.StringAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.EdgeData$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubscriptContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.api.r.SQLUtils.RegexContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.PythonWorkerFactory.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.ManualCloseOutputStream
Error instrumenting class:org.apache.spark.streaming.StreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeWeightSum
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DecimalAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data$
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog$
Error instrumenting class:org.apache.spark.streaming.api.java.JavaStreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Metric
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.NullOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.MutableStateArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.PipedRDD.NotEqualsFileNameFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.DumpByteCode
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTablePartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Variance
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaRDD.KafkaRDDIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTempViewUsingContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator30$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.CSVFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$FieldAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Power
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.DummySerializerInstance.$1
Error instrumenting class:org.apache.spark.input.ConfigurableCombineFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Summary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.JobScheduler.JobHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.HandleNullInputsForUDF
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Message.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.VertexPartition.VertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$DownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.TypeAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.ExecuteWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMetric
Error instrumenting class:org.apache.spark.sql.execution.streaming.SinkFileStatus$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StackCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.GlobalAggregates
Error instrumenting class:org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.SearchState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.input
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ShuffleMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteJobDescription
Error instrumenting class:org.apache.spark.status.api.v1.ApiRootResource$
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleRead
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowFrameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault1Context
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.TimestampType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggregateFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault2Context
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReregisterWithMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.TimestampAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.ReceiverSupervisor.ReceiverState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoParams
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Stop
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.BloomFilter.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumberContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.MODE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.KeyWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LongConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterViewQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFuncNameContext
Error instrumenting class:org.apache.spark.SparkContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NormalEquation
Error instrumenting class:org.apache.spark.sql.execution.datasources.CodecStreams$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLContext.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockBuilder
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigestSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.HashMapGrowthStrategy.Doubling
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.impl.RandomForest.NodeIndexInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleFunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.MetadataMismatchException
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$MapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.QuasiNewtonSolver.NormalEquationCostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Mean
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTablesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.LongHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestMasterState
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport
Error instrumenting class:org.apache.spark.ui.SparkUI
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate.RichAttribute
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetAppId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUpCast
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolvePivot
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.InMemoryBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.FrequentItems.FreqItemCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.StringPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GenericFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableSerDeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.MyByteArrayOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowRefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTblPropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ShortConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IfCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetStringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.Event
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMin
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableApplicationInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsConflicted
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager.$ReferenceWithCleanup
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.AppExecId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IntConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Unresolved$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigDecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma
Error instrumenting class:org.apache.spark.input.WholeTextFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlockBuilder
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.StringToColumn
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatDelimitedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetState
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.RandomForestModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.FloatAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueType
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimitiveDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Fixed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.text.TextFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator9$1
Created : .generated-mima-class-excludes in current directory.
Created : .generated-mima-member-excludes in current directory.
Using /usr/java/latest as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] spark-parent: previous-artifact not set, not analyzing binary compatibility
[info] spark-tags: previous-artifact not set, not analyzing binary compatibility
[info] spark-kvstore: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-flume-sink: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-common: previous-artifact not set, not analyzing binary compatibility
[info] Done updating.
[info] spark-tools: previous-artifact not set, not analyzing binary compatibility
[info] spark-unsafe: previous-artifact not set, not analyzing binary compatibility
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-library/2.11.8/scala-library-2.11.8.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-library;2.11.8!scala-library.jar (1154ms)
[info] spark-sketch: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sketch_2.11:2.3.0  (filtered 1)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-reflect/2.11.8/scala-reflect-2.11.8.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-reflect;2.11.8!scala-reflect.jar (697ms)
[info] spark-network-shuffle: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-yarn: previous-artifact not set, not analyzing binary compatibility
[info] spark-launcher: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-launcher_2.11:2.3.0  (filtered 1)
[info] spark-mllib-local: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib-local_2.11:2.3.0  (filtered 1)
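The MiMa lines read two ways: modules that print "previous-artifact not set" are skipped entirely, while modules checked against a 2.3.0 reference jar report "found 0 potential binary incompatibilities ... (filtered N)", where the filtered count is the number of exclusions applied from the build's filter list. A minimal sbt sketch of that wiring, assuming current sbt-mima-plugin key names (Spark's real configuration lives in project/MimaBuild.scala and project/MimaExcludes.scala, and older plugin releases used previousArtifact / binaryIssueFilters instead):

    // build.sbt fragment (illustrative, not Spark's actual build definition)
    import com.typesafe.tools.mima.core._

    // Reference artifact to diff the compiled jar against.
    mimaPreviousArtifacts := Set("org.apache.spark" %% "spark-sketch" % "2.3.0")

    // Each exclusion shows up in the "(filtered N)" count of the report.
    // The class name below is a hypothetical example.
    mimaBinaryIssueFilters +=
      ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.internal.SomeRemovedClass")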
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
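The two identical AccumulableParam warnings come from the compatibility shim in AccumulatorV2.scala itself; the deprecation's advice is to extend AccumulatorV2 directly. A minimal sketch of such a subclass (the accumulator type and names are illustrative, not taken from the file being compiled):

    import org.apache.spark.util.AccumulatorV2

    // Accumulates Long inputs into a Set[Long] result.
    class LongSetAccumulator extends AccumulatorV2[Long, Set[Long]] {
      private var _set = Set.empty[Long]
      override def isZero: Boolean = _set.isEmpty
      override def copy(): LongSetAccumulator = {
        val acc = new LongSetAccumulator
        acc._set = _set
        acc
      }
      override def reset(): Unit = _set = Set.empty
      override def add(v: Long): Unit = _set += v
      override def merge(other: AccumulatorV2[Long, Set[Long]]): Unit =
        _set ++= other.value
      override def value: Set[Long] = _set
    }

Registered via sparkContext.register(acc, "ids"), it behaves like the built-in accumulators.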
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/apache/avro/avro/1.7.7/avro-1.7.7.jar ...
[info] 	[SUCCESSFUL ] org.apache.avro#avro;1.7.7!avro.jar (194ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scalap/2.11.0/scalap-2.11.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scalap;2.11.0!scalap.jar (414ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/scala-compiler/2.11.0/scala-compiler-2.11.0.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang#scala-compiler;2.11.0!scala-compiler.jar (992ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-xml_2.11/1.0.1/scala-xml_2.11-1.0.1.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-xml_2.11;1.0.1!scala-xml_2.11.jar(bundle) (429ms)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.1/scala-parser-combinators_2.11-1.0.1.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.11;1.0.1!scala-parser-combinators_2.11.jar(bundle) (373ms)
[info] spark-ganglia-lgpl: previous-artifact not set, not analyzing binary compatibility
[info] spark-kubernetes: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-yarn: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-mesos: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-catalyst: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming_2.11:2.3.0  (filtered 3)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[info] spark-streaming-flume: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
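Every kafka-0-8 warning above carries the same advice, "Update to Kafka 0.10 integration". For reference, a minimal sketch of the 0.10 module's direct-stream entry point (broker address, group id, and topic are placeholders):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kafka010._

    def directStream(ssc: StreamingContext) = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "example-group",
        "auto.offset.reset" -> "latest")
      // Offsets ride along in the stream itself, so none of the
      // Broker/OffsetRange plumbing the 0.8 API needed.
      KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams))
    }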
[info] spark-streaming-kafka-0-8: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
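The KinesisUtils warnings name their replacement explicitly: KinesisInputDStream.builder. A minimal sketch assuming the Spark 2.4 builder methods, with stream name, endpoint, region, and app name as placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("myStream")
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
        .regionName("us-west-2")
        .initialPosition(new KinesisInitialPositions.TrimHorizon())
        .checkpointAppName("myKinesisApp")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()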
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
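The deprecated Worker constructor flagged in KinesisReceiver has a builder-based replacement in KCL 1.x; a sketch under that assumption (the factory and configuration arguments stand in for the receiver's actual state):

    import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory
    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.{KinesisClientLibConfiguration, Worker}

    def makeWorker(factory: IRecordProcessorFactory,
                   conf: KinesisClientLibConfiguration): Worker =
      new Worker.Builder()
        .recordProcessorFactory(factory)
        .config(conf)
        .build()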
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-streaming-kinesis-asl: previous-artifact not set, not analyzing binary compatibility
[info] spark-graphx: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-graphx_2.11:2.3.0  (filtered 3)
[info] downloading https://maven-central.storage-download.googleapis.com/maven2/org/slf4j/slf4j-api/1.7.21/slf4j-api-1.7.21.jar ...
[info] 	[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.21!slf4j-api.jar (265ms)
[info] spark-streaming-kafka-0-10: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.0  (filtered 6)
[info] spark-streaming-kafka-0-8-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-flume-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kinesis-asl-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-10-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-core: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-core_2.11:2.3.0  (filtered 909)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
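Line 46 of TriggerExecutor.scala is flagged five times, likely once per synthesized case-class method that touches the deprecated type; the replacement the message names is the Trigger factory. A minimal sketch using the built-in rate source (app name and interval are arbitrary):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder().appName("trigger-demo").getOrCreate()
    val df = spark.readStream.format("rate").load()

    df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))  // replaces ProcessingTime("10 seconds")
      .start()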
[info] spark-avro: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:483: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
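The poll(0) pattern flagged throughout these Kafka sources is the old long-timeout overload; newer kafka-clients releases provide poll(java.time.Duration), which also bounds the time spent on metadata fetches. A minimal sketch of the replacement call shape (timeout and record handling are arbitrary):

    import java.time.Duration
    import scala.collection.JavaConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer

    def pollOnce(consumer: KafkaConsumer[String, String]): Unit =
      consumer.poll(Duration.ofMillis(100)).iterator().asScala.foreach { r =>
        println(s"${r.topic}-${r.partition}@${r.offset}")
      }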
[info] spark-sql-kafka-0-10: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
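The OneHotEncoder deprecation spells out its migration path: OneHotEncoderEstimator, which already exposes the column-array API the renamed class will keep. A minimal sketch with placeholder column names:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    // encoder.fit(df).transform(df) yields the encoded columns.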
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-hive: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] spark-repl: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-hive-thriftserver: previous-artifact not set, not analyzing binary compatibility
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/spark-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-assembly: previous-artifact not set, not analyzing binary compatibility
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/spark-examples_2.11-2.4.9-SNAPSHOT.jar ...
[info] spark-sql: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:2.3.0  (filtered 294)
[info] spark-mllib: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:2.3.0  (filtered 514)
[info] Done packaging.
[info] spark-examples: previous-artifact not set, not analyzing binary compatibility
[success] Total time: 40 s, completed May 7, 2021 9:38:35 AM
[info] Building Spark assembly (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos assembly/package
Using /usr/java/latest as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.9-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 18 s, completed May 7, 2021 9:39:05 AM

========================================================================
Running Java style checks
========================================================================
Checkstyle checks passed.

========================================================================
Running Spark unit tests
========================================================================
[info] Running Spark tests using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test
Using /usr/java/latest as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] ScalaTest
[info] Run completed in 61 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 96 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 106 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}repl...
[info] ScalaTest
[info] Run completed in 31 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] BitArraySuite:
[info] SparkSinkSuite:
[info] - error case when create BitArray (18 milliseconds)
[info] - bitSize (4 milliseconds)
[info] - set (2 milliseconds)
[info] Test run started
[info] - normal operation (14 milliseconds)
[info] Test run started
[info] - merge (23 milliseconds)
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testKill started
[info] BloomFilterSuite:
[info] - accuracy - Byte (14 milliseconds)
[info] - mergeInPlace - Byte (7 milliseconds)
[info] - accuracy - Short (11 milliseconds)
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithStart started
[info] - mergeInPlace - Short (18 milliseconds)
[info] - accuracy - Int (51 milliseconds)
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testLauncher started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testErrorPropagation started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.179s
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithStart started
[info] Test run started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testMissingArg started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithLast started
[info] - mergeInPlace - Int (189 milliseconds)
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testAllOptions started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.testRefWithIntNaturalKey started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithLast started
[info] - accuracy - Long (42 milliseconds)
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndex started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.394s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testDuplicateIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testEmptyIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIndexAnnotation started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testEqualSeparatedOption started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testExtraOptions started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNumEncoding started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.247s
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexMethod started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testKeyClashes started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex2 started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.016s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithStart started
[info] Test run started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliParser started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkLauncher started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testAlternateSyntaxParsing started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunner started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testSparkRShell started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testMissingAppResource started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testShellCliParser started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testClusterCmdBuilder started
[info] - mergeInPlace - Long (102 milliseconds)
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testDriverCmdBuilder started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliKillAndStatus started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoArg started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkFallback started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerWithMasterNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliHelpAndNoArg started
[info] Test run finished: 0 failed, 0 ignored, 15 total, 0.133s
[info] Test run started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testNoRedirectToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithOutputRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectOutputToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectsSimple started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithLogRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testFailedChildProc started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorTwiceFails started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testBadLogRedirect started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectLastWins started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectToLog started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.177s
[info] Test run started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testValidOptionStrings started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testJavaMajorVersion started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testPythonArgQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testWindowsBatchQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testInvalidOptionStrings started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.008s
[info] Test run started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testTimeout started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testStreamFiltering started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testSparkSubmitVmShutsDown started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testLauncherServerReuse started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testAppHandleDisconnect started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testCommunication started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.087s
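
The launcher suites above (SparkSubmitCommandBuilderSuite, ChildProcAppHandleSuite, LauncherServerSuite) exercise the public org.apache.spark.launcher API. As a rough sketch of what that API looks like from user code (every path, class name, and master URL below is a placeholder, not something taken from this build):

    import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

    object LaunchExample {
      def main(args: Array[String]): Unit = {
        val handle = new SparkLauncher()
          .setSparkHome("/opt/spark")            // placeholder
          .setAppResource("/tmp/my-app.jar")     // placeholder
          .setMainClass("com.example.MyApp")     // placeholder
          .setMaster("local[2]")
          .startApplication(new SparkAppHandle.Listener {
            // Invoked on state transitions (CONNECTED, RUNNING, FINISHED, ...).
            override def stateChanged(h: SparkAppHandle): Unit =
              println(s"state: ${h.getState}")
            override def infoChanged(h: SparkAppHandle): Unit = ()
          })
      }
    }
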
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.testRefWithIntNaturalKey started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndex started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn]       val sslContextFactory = new SslContextFactory()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
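
The warning above (AccumulatorV2.scala:492) is Spark's own compatibility shim tripping the deprecation it points users away from: AccumulableParam is superseded by AccumulatorV2. A minimal sketch of the replacement API; the accumulator class and names here are illustrative, not from Spark's sources:

    import org.apache.spark.util.AccumulatorV2

    // Illustrative: collects strings; IN = String, OUT = java.util.List[String].
    class StringListAccumulator
        extends AccumulatorV2[String, java.util.List[String]] {
      private val list =
        java.util.Collections.synchronizedList(new java.util.ArrayList[String]())

      override def isZero: Boolean = list.isEmpty
      override def copy(): StringListAccumulator = {
        val acc = new StringListAccumulator
        acc.list.addAll(list)
        acc
      }
      override def reset(): Unit = list.clear()
      override def add(v: String): Unit = list.add(v)
      override def merge(other: AccumulatorV2[String, java.util.List[String]]): Unit =
        list.addAll(other.value)
      override def value: java.util.List[String] = list
    }

An instance becomes live once registered on the driver, e.g. sc.register(new StringListAccumulator, "strings").
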
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndex started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run finished: 0 failed, 0 ignored, 38 total, 1.195s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.ArrayWrappersSuite.testGenericArrayKey started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBBenchmark ignored
[info] Test run finished: 0 failed, 1 ignored, 0 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testBasicIteration started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMultipleObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMetadata started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testRemoveAll started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.017s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleTypesWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testReopenAndVersionCheckDb started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMetadata started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testRemoveAll started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testNegativeIndexValues started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 0.113s
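
The InMemoryStore/LevelDB suites above cover org.apache.spark.util.kvstore, the internal key-value abstraction behind the history server's state; it is not a public API. For orientation only, a sketch against the in-memory implementation (the Entry type is invented here):

    import org.apache.spark.util.kvstore.{InMemoryStore, KVIndex}

    // Invented type: the @KVIndex method supplies the natural key
    // that read/delete look objects up by.
    class Entry(val id: String, val payload: String) {
      @KVIndex def key: String = id
    }

    object KVStoreExample {
      def main(args: Array[String]): Unit = {
        val store = new InMemoryStore()
        store.write(new Entry("a", "hello"))
        println(store.read(classOf[Entry], "a").payload) // "hello"
        store.delete(classOf[Entry], "a")
      }
    }
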
[info] - Success with ack (2 seconds, 13 milliseconds)
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.9-SNAPSHOT.jar ...
[info] UTF8StringPropertyCheckSuite:
[info] - toString (137 milliseconds)
[info] - numChars (12 milliseconds)
[info] - startsWith (21 milliseconds)
[info] - endsWith (13 milliseconds)
[info] - toUpperCase (6 milliseconds)
[info] - toLowerCase (8 milliseconds)
[info] - compare (12 milliseconds)
[info] - substring (48 milliseconds)
[info] - contains (38 milliseconds)
[info] - trim, trimLeft, trimRight (14 milliseconds)
[info] - reverse (4 milliseconds)
[info] - indexOf (20 milliseconds)
[info] - repeat (11 milliseconds)
[info] - lpad, rpad (5 milliseconds)
[info] - concat (58 milliseconds)
[info] - concatWs (42 milliseconds)
[info] - split !!! IGNORED !!!
[info] - levenshteinDistance (14 milliseconds)
[info] - hashCode (4 milliseconds)
[info] - equals (3 milliseconds)
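
UTF8StringPropertyCheckSuite above property-checks org.apache.spark.unsafe.types.UTF8String against java.lang.String semantics, one operation per test name. For orientation, the operations map onto the API like this (a sketch, not code from the suite):

    import org.apache.spark.unsafe.types.UTF8String

    object UTF8StringExample {
      def main(args: Array[String]): Unit = {
        val s = UTF8String.fromString("Spark")
        println(s.numChars())                               // 5
        println(s.startsWith(UTF8String.fromString("Sp")))  // true
        println(s.toUpperCase())                            // SPARK
        println(s.contains(UTF8String.fromString("par")))   // true
        println(s.reverse())                                // krapS
        println(s.repeat(2))                                // SparkSpark
      }
    }
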
[info] TestingUtilsSuite:
[info] Test run started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.addTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.equalsTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromYearMonthStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.toStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.subtractTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromCaseInsensitiveStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromSingleUnitStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromDayTimeStringTest started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 0.019s
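
CalendarIntervalSuite's fromString/add/subtract tests above target org.apache.spark.unsafe.types.CalendarInterval, which models an interval as months plus microseconds. A small sketch of the parsing and arithmetic being tested (assuming the 2.4-era API):

    import org.apache.spark.unsafe.types.CalendarInterval

    object IntervalExample {
      def main(args: Array[String]): Unit = {
        // fromString parses the SQL-literal body.
        val day   = CalendarInterval.fromString("interval 1 day")
        val hours = CalendarInterval.fromString("interval 3 hours")
        println(day.add(hours))      // one interval: 1 day and 3 hours
        println(day.subtract(hours)) // 21 hours, carried in microseconds
      }
    }
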
[info] - Comparing doubles using relative error. (30 milliseconds)
[info] - Comparing doubles using absolute error. (5 milliseconds)
[info] Test run started
[info] Test org.apache.spark.unsafe.array.LongArraySuite.basicTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOnHeapMemoryBlockResetsBaseObjectAndOffset started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.overlappingCopyMemory started
[info] - Comparing vectors using relative error. (25 milliseconds)
[info] - Comparing vectors using absolute error. (6 milliseconds)
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.memoryDebugFillEnabledInTest started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.offHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.heapMemoryReuse started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorPoolingReUsesLongArrays started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOffHeapMemoryBlockResetsOffset started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.061s
[info] Test run started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownLongInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownIntegerInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTest started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownBytesInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestPaddedStrings started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestBytes started
[info] - Comparing Matrices using absolute error. (311 milliseconds)
[info] - Comparing Matrices using relative error. (9 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.338s
[info] Test run started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.titleCase started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.soundex started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.basicTest started
[info] UtilsSuite:
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamUnderflow started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToShort started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.startsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.compareTo started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.levenshteinDistance started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamOverflow started
[info] - EPSILON (4 milliseconds)
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamIntArray started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.upperAndLower started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToInt started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.createBlankString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.prefix started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatWsTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.repeat started
[info] MatricesSuite:
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.contains started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.skipWrongFirstByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.emptyStringTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamSlice started
[info] - dense matrix construction (1 millisecond)
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimBothWithTrimString started
[info] - dense matrix construction with wrong dimension (1 millisecond)
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substringSQL started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring_index started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.pad started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.split started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trims started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimRightWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.findInSet started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.translate started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.reverse started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimLeftWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.endsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToLong started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStream started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.indexOf started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.054s
[info] - sparse matrix construction (173 milliseconds)
[info] - sparse matrix construction with wrong number of elements (1 millisecond)
[info] - index in matrices incorrect input (8 milliseconds)
[info] - equals (24 milliseconds)
[info] - matrix copies are deep copies (1 millisecond)
[info] - matrix indexing and updating (2 milliseconds)
[info] - dense to dense (3 milliseconds)
[info] - Failure with nack (1 second, 478 milliseconds)
[info] - dense to sparse (3 milliseconds)
[info] - sparse to sparse (6 milliseconds)
[info] - sparse to dense (3 milliseconds)
[info] - compressed dense (6 milliseconds)
[info] - compressed sparse (3 milliseconds)
[info] - map, update (2 milliseconds)
[info] - transpose (1 millisecond)
[info] - foreachActive (2 milliseconds)
[info] - horzcat, vertcat, eye, speye (13 milliseconds)
[info] - zeros (1 millisecond)
[info] - ones (1 millisecond)
[info] - eye (0 milliseconds)
[info] - rand (165 milliseconds)
[info] - randn (3 milliseconds)
[info] - diag (1 millisecond)
[info] - sprand (9 milliseconds)
[info] - sprandn (4 milliseconds)
[info] - toString (18 milliseconds)
[info] - numNonzeros and numActives (0 milliseconds)
[info] - fromBreeze with sparse matrix (16 milliseconds)
May 07, 2021 9:39:35 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
May 07, 2021 9:39:36 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
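
The two WARNINGs above are netlib-java failing to find a native BLAS and falling back to its pure-JVM F2J implementation; the linear-algebra suites still pass, only slower. Where native speed matters, the usual recipe is a system BLAS (e.g. OpenBLAS) plus the netlib native wrappers on the classpath. An illustrative sbt fragment, not part of this build:

    // build.sbt fragment (illustrative): the "all" artifact aggregates the
    // JNI wrappers netlib needs to bind to an installed system BLAS/LAPACK.
    libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
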
[info] - row/col iterator (44 milliseconds)
[info] BreezeMatrixConversionSuite:
[info] - dense matrix to breeze (1 millisecond)
[info] - dense breeze matrix to matrix (1 millisecond)
[info] - sparse matrix to breeze (1 millisecond)
[info] - sparse breeze matrix to sparse matrix (1 millisecond)
[info] MultivariateGaussianSuite:
[info] - accuracy - String (3 seconds, 490 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadChallenge started
May 07, 2021 9:39:36 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
May 07, 2021 9:39:36 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
[info] - univariate (442 milliseconds)
[info] - multivariate (12 milliseconds)
[info] - multivariate degenerate (1 millisecond)
[info] - SPARK-11302 (7 milliseconds)
[info] BreezeVectorConversionSuite:
[info] - dense to breeze (1 millisecond)
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongAppId started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongNonce started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testMismatchedSecret started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessage started
[info] - sparse to breeze (166 milliseconds)
[info] - dense breeze to vector (0 milliseconds)
[info] - sparse breeze to vector (1 millisecond)
[info] Done packaging.
[info] - sparse breeze with partially-used arrays to vector (1 millisecond)
[info] VectorsSuite:
[info] - dense vector construction with varargs (1 millisecond)
[info] - dense vector construction from a double array (1 millisecond)
[info] - sparse vector construction (1 millisecond)
[info] - sparse vector construction with unordered elements (3 milliseconds)
[info] - sparse vector construction with mismatched indices/values array (2 milliseconds)
[info] - sparse vector construction with too many indices vs size (1 millisecond)
[info] - sparse vector construction with negative indices (1 millisecond)
[info] - dense to array (1 millisecond)
[info] - dense argmax (2 milliseconds)
[info] - sparse to array (1 millisecond)
[info] - sparse argmax (2 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessageWhenTransferringZeroBytes started
[info] - vector equals (4 milliseconds)
[info] - vectors equals with explicit 0 (3 milliseconds)
[info] - indexing dense vectors (2 milliseconds)
[info] - indexing sparse vectors (1 millisecond)
[info] - zeros (1 millisecond)
[info] - Vector.copy (1 millisecond)
[info] - fromBreeze (2 milliseconds)
[info] - sqdist (66 milliseconds)
[info] - foreachActive (3 milliseconds)
[info] - vector p-norm (6 milliseconds)
[info] - Vector numActive and numNonzeros (3 milliseconds)
[info] - Vector toSparse and toDense (2 milliseconds)
[info] - Vector.compressed (0 milliseconds)
[info] - SparseVector.slice (1 millisecond)
[info] - sparse vector only support non-negative length (2 milliseconds)
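
MatricesSuite and VectorsSuite above exercise the public org.apache.spark.ml.linalg factories (dense/sparse construction, conversions, norms). The constructions under test, in brief:

    import org.apache.spark.ml.linalg.{Matrices, Vectors}

    object LinalgExample {
      def main(args: Array[String]): Unit = {
        // Dense 2x2 matrix, values given in column-major order.
        val dm = Matrices.dense(2, 2, Array(1.0, 2.0, 3.0, 4.0))
        // Sparse vector: size 4, nonzeros at indices 0 and 3.
        val sv = Vectors.sparse(4, Array(0, 3), Array(1.0, -2.0))
        println(dm.transpose)
        println(sv.toDense)          // [1.0, 0.0, 0.0, -2.0]
        println(Vectors.norm(sv, 2)) // p-norm, as in the "vector p-norm" test
      }
    }
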
[info] BLASSuite:
[info] - copy (6 milliseconds)
[info] - scal (0 milliseconds)
[info] - axpy (3 milliseconds)
[info] - dot (2 milliseconds)
[info] - spr (2 milliseconds)
[info] - syr (8 milliseconds)
[info] - gemm (6 milliseconds)
[info] - gemv (5 milliseconds)
[info] - spmv (1 millisecond)
[info] - Failure with timeout (1 second, 235 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testAuthEngine started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadKeySize started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.611s
[info] Test run started
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.streamStatesAreFreedWhenConnectionIsClosedEvenIfBufferIteratorThrowsException started
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.managedBuffersAreFeedWhenConnectionIsClosed started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.101s
[info] Test run started
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
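
The kafka-0-10 warnings above are all one issue: Consumer.poll(long) is deprecated in the kafka-clients version this module builds against, in favour of poll(java.time.Duration), which also bounds time spent fetching metadata. Spark keeps poll(0) here deliberately, since it relies on the old overload's block-for-assignment side effect; where the swap does apply, the replacement looks like:

    import java.time.Duration
    import scala.collection.JavaConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer

    // Assumes an already-configured consumer; the Duration bounds total blocking.
    def pollOnce(consumer: KafkaConsumer[String, String]): Unit = {
      val records = consumer.poll(Duration.ofMillis(100))
      records.asScala.foreach(r => println(s"${r.topic}-${r.partition}@${r.offset}"))
    }
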
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
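
The long 'Update to Kafka 0.10 integration' run above is the kafka-0-8 module compiling its own deprecated classes. The 0.10 replacement the messages point to is the direct stream, roughly as follows (broker address, group id, and topic are placeholders):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    def createStream(ssc: StreamingContext) = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",   // placeholder
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "example")                   // placeholder
      KafkaUtils.createDirectStream[String, String](
        ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))
    }
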
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
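
The kinesis-asl block above splits into two families: deprecated AWS SDK constructors/setters (superseded by the SDK's client builders, e.g. AmazonKinesisClientBuilder) and KinesisUtils.createStream (superseded by KinesisInputDStream.builder). A sketch of the builder the messages recommend; stream, region, endpoint, and app names are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def createStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("my-stream")                              // placeholder
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
        .regionName("us-west-2")
        .initialPosition(new KinesisInitialPositions.Latest)
        .checkpointAppName("example-app")                     // placeholder
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()
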
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run started
[info] ScalaTest
[info] ScalaTest
[info] Run completed in 29 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 37 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 42 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 25 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - Multiple consumers (1 second, 580 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchUnregisteredExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNoServer started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testRegisterInvalidExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchThreeSort started
[info] - mergeInPlace - String (3 seconds)
[info] - incompatible merge (2 milliseconds)
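
BloomFilterSuite above walks the same accuracy/mergeInPlace checks across element types, ending with the 'incompatible merge' case. The public API under test, org.apache.spark.util.sketch.BloomFilter, in a compact sketch (item count and fpp arbitrary):

    import org.apache.spark.util.sketch.BloomFilter

    object BloomExample {
      def main(args: Array[String]): Unit = {
        // 1000 expected items, 3% target false-positive probability.
        val a = BloomFilter.create(1000, 0.03)
        val b = BloomFilter.create(1000, 0.03)
        a.putLong(42L)
        b.putString("spark")
        a.mergeInPlace(b)                       // throws if configs are incompatible
        println(a.mightContain(42L))            // true
        println(a.mightContainString("spark"))  // true
      }
    }
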
[info] CountMinSketchSuite:
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongBlockId started
[info] - accuracy - Byte (262 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNonexistent started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchOneSort started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 1.732s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testRetryAndUnrecoverable started
[info] - mergeInPlace - Byte (223 milliseconds)
[info] - Multiple consumers with some failures (1 second, 342 milliseconds)
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnFirst started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testUnrecoverableFailure started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnSecond started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testThreeIOExceptions started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testNoFailures started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testTwoIOExceptions started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.234s
[info] ScalaTest
[info] Run completed in 8 seconds, 997 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 44, Failed 0, Errors 0, Passed 44
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadSecret started
[info] ScalaTest
[info] Run completed in 9 seconds, 25 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 104, Failed 0, Errors 0, Passed 103, Skipped 1
[info] ScalaTest
[info] Run completed in 9 seconds, 31 milliseconds.
[info] Total number of tests run: 19
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 19, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[info] Passed: Total 81, Failed 0, Errors 0, Passed 81, Ignored 1
[info] ScalaTest
[info] Run completed in 8 seconds, 985 milliseconds.
[info] Total number of tests run: 85
[info] Suites: completed 8, aborted 0
[info] Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 85, Failed 0, Errors 0, Passed 85
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadAppId started
[info] ScalaTest
[info] Run completed in 9 seconds, 110 milliseconds.
[info] Total number of tests run: 5
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 5, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 5, Failed 0, Errors 0, Passed 5
[info] ScalaTest
[info] Run completed in 3 seconds, 51 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - accuracy - Short (704 milliseconds)
[info] - mergeInPlace - Short (357 milliseconds)
[info] - accuracy - Int (815 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testValid started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testEncryption started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 2.044s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testGoodClient started
[info] - mergeInPlace - Int (489 milliseconds)
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslServer started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testBadClient started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.519s
[info] Test run started
[info] Test org.apache.spark.network.sasl.ShuffleSecretManagerSuite.testMultipleRegisters started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.003s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithoutShuffleFiles started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.203s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testSaslAppIsolation started
[info] - accuracy - Long (758 milliseconds)
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testAuthEngineAppIsolation started
[info] JdbcRDDSuite:
[info] - mergeInPlace - Long (342 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.812s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testNormalizeAndInternPathname started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testSortShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testBadRequests started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.jsonSerializationOfExecutorRegistration started
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.9-SNAPSHOT   (depends on 1.1.0)
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.9-SNAPSHOT (depends on 1.1.0)
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
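The eviction report above is informational: sbt picked the highest version in each conflict set. When a specific conflict should be an explicit decision rather than a recurring warning, the chosen version can be pinned in the build definition. A minimal sketch for a hypothetical build.sbt (Spark's real build manages these versions through its Maven POMs and SparkBuild.scala, so this is illustrative only):

    // build.sbt -- make the version sbt already selected an explicit choice,
    // which silences the eviction warning for this artifact
    dependencyOverrides += "com.google.code.findbugs" % "jsr305" % "3.0.2"

The full report is available from the sbt shell with the 'evicted' task, as the last warning line suggests.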
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.389s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testOpenShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testRegisterExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testBadMessages started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.043s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testEmptyBlockFetch started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailure started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailureAndSuccess started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchThree started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchOne started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.008s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.BlockTransferMessagesSuite.serializeOpenShuffleBlocks started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupOnlyRemovedApp started
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}assembly...
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupUsesExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.noCleanupAndCleanup started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupMultipleExecutors started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.247s
[info] - accuracy - String (2 seconds, 723 milliseconds)
[info] DistributedSuite:
[info] - basic functionality (3 seconds, 193 milliseconds)
[info] - mergeInPlace - String (2 seconds, 635 milliseconds)
[info] - large id overflow (681 milliseconds)
[info] SparkUncaughtExceptionHandlerSuite:
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutCleanlyClosesClient started
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = true (1 second, 777 milliseconds)
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = false (2 seconds, 141 milliseconds)
[info] - accuracy - Byte array (5 seconds, 700 milliseconds)
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = true (1 second, 810 milliseconds)
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.9-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.9-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.9-SNAPSHOT  (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.9-SNAPSHOT   (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.9-SNAPSHOT (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.9-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
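The first Parquet warning above concerns ParquetOutputFormat.ENABLE_JOB_SUMMARY, the old boolean switch for _metadata summary files. Newer parquet-mr supersedes it with a tri-state level; a minimal sketch of the replacement on a Hadoop Configuration (assuming a parquet-mr version that ships JOB_SUMMARY_LEVEL):

    import org.apache.hadoop.conf.Configuration

    val conf = new Configuration()
    // deprecated boolean key the warning points at:
    //   parquet.enable.summary-metadata = true | false
    // its replacement is a level rather than a flag:
    conf.set("parquet.summary.metadata.level", "NONE") // ALL | COMMON_ONLY | NONE

The readFooter deprecations in the same block sit inside Spark's own Parquet reader, so there is nothing to change on the user side.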
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
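The TriggerExecutor warning carries the usual migration hint: the streaming ProcessingTime class is deprecated in favor of the Trigger factory methods. In user code the change is mechanical; a minimal sketch, assuming a streaming DataFrame named df:

    import org.apache.spark.sql.streaming.Trigger

    // before (deprecated): .trigger(ProcessingTime("10 seconds"))
    // after:
    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()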
[info] - task throws not serializable exception (8 seconds, 517 milliseconds)
[info] - local-cluster format (6 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:483: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
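The poll(long) overload was deprecated by the Kafka 2.x client in favor of poll(java.time.Duration), which puts an explicit bound on how long the call may block. A minimal sketch of the preferred form (assuming a KafkaConsumer named consumer; the timeout value is illustrative):

    import java.time.Duration

    // before (deprecated): consumer.poll(0)
    // after: an explicit, bounded wait
    val records = consumer.poll(Duration.ofMillis(100))

Note that poll(0) was commonly used to force a metadata refresh without consuming records; poll(Duration) folds metadata fetching into the timeout, which is one reason call sites like the ones above cannot be migrated blindly.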
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = false (2 seconds, 3 milliseconds)
[info] - mergeInPlace - Byte array (2 seconds, 933 milliseconds)
[info] - incompatible merge (2 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
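The OneHotEncoder deprecation points at OneHotEncoderEstimator, which already has the fit/transform shape that the renamed class keeps in 3.0. A minimal sketch, assuming a DataFrame named indexed with a numeric categoryIndex column:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    // fit learns the category sizes, transform emits the one-hot vectors
    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(indexed).transform(indexed)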
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
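The repeated sbt hint is actionable from the build definition: naming an entry point explicitly stops sbt from guessing among the discovered classes when it writes the jar manifest. A minimal sketch for a hypothetical build.sbt (the class name is illustrative):

    // pick the manifest Main-Class explicitly instead of letting sbt choose
    mainClass in Compile := Some("org.apache.spark.examples.SparkPi")

and 'show discoveredMainClasses' from the sbt shell lists the candidates it found.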
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
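The SparkILoop warning quotes the replacement directly: the REPL's addedClasspath field is deprecated in favor of the reset, replay, and require commands. A usage sketch from a Scala 2.11 session (the jar path is hypothetical):

    // append a jar to the running class path, then replay the session with it visible:
    scala> :require /tmp/extra-lib.jar
    scala> :replay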
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = true (1 second, 657 milliseconds)
[info] FlumePollingStreamSuite:
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutInactiveRequests started
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = false (1 second, 879 milliseconds)
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.9-SNAPSHOT.jar ...
[info] - simple groupByKey (5 seconds, 146 milliseconds)
[info] Done packaging.
[info] ScalaTest
[info] Run completed in 37 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = true (2 seconds, 136 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = false (1 second, 540 milliseconds)
[info] PipedRDDSuite:
[info] - basic pipe (109 milliseconds)
[info] - basic pipe with tokenization (110 milliseconds)
[info] - failure in iterating over pipe input (74 milliseconds)
[info] - stdin writer thread should be exited when task is finished (107 milliseconds)
[info] - advanced pipe (636 milliseconds)
[info] - pipe with empty partition (132 milliseconds)
[info] - pipe with env variable (33 milliseconds)
[info] - pipe with process which cannot be launched due to bad command (36 milliseconds)
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
[info] - pipe with process which is launched but fails with non-zero exit status (44 milliseconds)
[info] - basic pipe with separate working directory (125 milliseconds)
[info] - test pipe exports map_input_file (87 milliseconds)
[info] - test pipe exports mapreduce_map_input_file (40 milliseconds)
[info] AccumulatorV2Suite:
[info] - LongAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - DoubleAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - ListAccumulator (0 milliseconds)
[info] - LegacyAccumulatorWrapper (2 milliseconds)
[info] - LegacyAccumulatorWrapper with AccumulatorParam that has no equals/hashCode (3 milliseconds)
[info] FileSuite:
[info] - text files (491 milliseconds)
[info] - groupByKey where map output sizes exceed maxMbInFlight (5 seconds, 134 milliseconds)
[info] - text files (compressed) (681 milliseconds)
[info] - SequenceFiles (351 milliseconds)
[info] - SequenceFile (compressed) (500 milliseconds)
[info] - SequenceFile with writable key (292 milliseconds)
[info] - SequenceFile with writable value (241 milliseconds)
[info] - SequenceFile with writable key and value (323 milliseconds)
[info] - implicit conversions in reading SequenceFiles (404 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 3 total, 32.491s
[info] Test run started
[info] Test org.apache.spark.network.ProtocolSuite.responses started
[info] - object files of ints (364 milliseconds)
[info] Test org.apache.spark.network.ProtocolSuite.requests started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.133s
[info] Test run started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariable started
[info] - object files of complex types (244 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariableConcurrent started
[info] - accumulators (3 seconds, 778 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeFactoryBeforeCreateClient started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeBlockClientsWithFactory started
[info] - object files of classes from a JAR (1 second, 627 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.neverReturnInactiveClients started
[info] - write SequenceFile using new Hadoop API (295 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeIdleConnectionForRequestTimeOut started
[info] - read SequenceFile using new Hadoop API (309 milliseconds)
[info] - binary file input as byte array (203 milliseconds)
[info] - portabledatastream caching tests (280 milliseconds)
[info] - portabledatastream persist disk storage (233 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.returnDifferentClientsForDifferentServers started
[info] - portabledatastream flatmap tests (219 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 7 total, 3.847s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslClientFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslServerFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthReplay started
[info] - SPARK-22357 test binaryFiles minPartitions (486 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testNewAuth started
[info] - minimum split size per node and per rack should be less than or equal to maxSplitSize (220 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testLargeMessageEncryption started
[info] - fixed record length binary file as byte array (176 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthFailure started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.894s
[info] Test run started
[info] - negative binary record length should raise an exception (128 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamConcurrently started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendOneWayMessage started
[info] Test org.apache.spark.network.RpcIntegrationSuite.singleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.throwErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleTrouble started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleRPC started
[info] - file caching (185 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.returnErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamFailures started
[info] - prevent user from overwriting the empty directory (old Hadoop API) (129 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamOneAtATime started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendSuccessAndFailure started
[info] - prevent user from overwriting the non-empty directory (old Hadoop API) (200 milliseconds)
[info] - broadcast variables (4 seconds, 321 milliseconds)
[info] - flume polling test (14 seconds, 86 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.734s
[info] Test run started
[info] Test org.apache.spark.network.crypto.TransportCipherSuite.testBufferNotLeaksOnInternalError started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.013s
[info] Test run started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnException started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnClose started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.testActiveStreams started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.clearAllOutstandingRequests started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.028s
[info] Test run started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testEmptyFrame started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testNegativeFrameSize started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testSplitLengthField started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testFrameDecoding started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testInterception started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testRetainedFrames started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.082s
[info] Test run started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testDeallocateReleasesManagedBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testByteBufBody started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testShortWrite started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodySingleBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodyMultipleBuffers started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testSingleWrite started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.007s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testNonMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslAuthentication started
[info] - allow user to disable the output directory existence checking (old Hadoop API) (308 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessageChunking started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testServerAlwaysEncrypt started
[info] - prevent user from overwriting the empty directory (new Hadoop API) (94 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDataEncryptionIsActuallyEnabled started
[info] - prevent user from overwriting the non-empty directory (new Hadoop API) (218 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testFileRegionEncryption started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslEncryption started
[info] - allow user to disable the output directory existence checking (new Hadoop API) (191 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDelegates started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessage started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testRpcHandlerDelegate started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.687s
[info] Test run started
[info] Test org.apache.spark.network.util.CryptoUtilsSuite.testConfConversion started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.TransportRequestHandlerSuite.handleFetchRequestAndStreamRequest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.009s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testServerResponse started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testClientChallenge started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testGeneralNettyMemoryMetrics started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testAdditionalMetrics started
[info] - save Hadoop Dataset through old Hadoop API (136 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.188s
[info] Test run started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchNonExistentChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchFileChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBothChunks started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchChunkAndNonExistent started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBufferChunk started
[info] - save Hadoop Dataset through new Hadoop API (141 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.161s
[info] Test run started
[info] Test org.apache.spark.network.StreamSuite.testSingleStream started
[info] Test org.apache.spark.network.StreamSuite.testMultipleStreams started
[info] Test org.apache.spark.network.StreamSuite.testConcurrentStreams started
[info] - Get input files via old Hadoop API (207 milliseconds)
[info] Test org.apache.spark.network.StreamSuite.testZeroLengthStream started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.247s
[info] - Get input files via new Hadoop API (222 milliseconds)
[info] - spark.files.ignoreCorruptFiles should work with both HadoopRDD and NewHadoopRDD (403 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits works correctly (old Hadoop API) (463 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits works correctly (new Hadoop API) (464 milliseconds)
[info] KafkaRDDSuite:
[info] - spark.files.ignoreMissingFiles should work with both HadoopRDD and NewHadoopRDD (450 milliseconds)
[info] LogPageSuite:
[info] - get logs simple (216 milliseconds)
[info] PartiallyUnrolledIteratorSuite:
[info] - join two iterators (40 milliseconds)
[info] HistoryServerDiskManagerSuite:
[info] - leasing space (146 milliseconds)
[info] - tracking active stores (33 milliseconds)
[info] - approximate size heuristic (1 millisecond)
[info] - repeatedly failing task (4 seconds, 45 milliseconds)
[info] - SPARK-32024: update ApplicationStoreInfo.size during initializing (40 milliseconds)
[info] ExternalShuffleServiceSuite:
[info] - groupByKey without compression (250 milliseconds)
[info] - shuffle non-zero block size (4 seconds, 912 milliseconds)
[info] - repeatedly failing task that crashes JVM (8 seconds, 339 milliseconds)
[info] - flume polling test multiple hosts (13 seconds, 629 milliseconds)
[info] - shuffle serializer (4 seconds, 503 milliseconds)
[info] FlumeStreamSuite:
[info] - flume input stream (1 second, 241 milliseconds)
[info] - flume input compressed stream (1 second, 121 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumeStreamSuite.testFlumeStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.297s
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumePollingStreamSuite.testFlumeStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.131s
[info] LabelPropagationSuite:
[info] - zero sized blocks (7 seconds, 502 milliseconds)
[info] - repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925) (12 seconds, 112 milliseconds)
[info] - Label Propagation (9 seconds, 805 milliseconds)
[info] BytecodeUtilsSuite:
[info] - closure invokes a method (12 milliseconds)
[info] - closure inside a closure invokes a method (4 milliseconds)
[info] - closure inside a closure inside a closure invokes a method (6 milliseconds)
[info] - closure calling a function that invokes a method (4 milliseconds)
[info] - closure calling a function that invokes a method which uses another closure (5 milliseconds)
[info] - nested closure (5 milliseconds)
[info] PregelSuite:
[info] - zero sized blocks without kryo (7 seconds, 136 milliseconds)
[info] - 1 iteration (910 milliseconds)
[info] - caching (encryption = off) (5 seconds, 614 milliseconds)
[info] - chain propagation (2 seconds, 783 milliseconds)
[info] PeriodicGraphCheckpointerSuite:
[info] - Persisting (164 milliseconds)
[info] - shuffle on mutable pairs (4 seconds, 605 milliseconds)
[info] - Checkpointing (3 seconds, 28 milliseconds)
[info] ConnectedComponentsSuite:
[info] - caching (encryption = on) (5 seconds, 40 milliseconds)
[info] - sorting on mutable pairs (4 seconds, 620 milliseconds)
[info] - Grid Connected Components (4 seconds, 173 milliseconds)
[info] - caching on disk (encryption = off) (4 seconds, 748 milliseconds)
[info] - cogroup using mutable pairs (4 seconds, 23 milliseconds)
[info] - Reverse Grid Connected Components (4 seconds, 94 milliseconds)
[info] - caching on disk (encryption = on) (5 seconds, 59 milliseconds)
[info] - subtract mutable pairs (4 seconds, 518 milliseconds)
[info] - Chain Connected Components (5 seconds, 190 milliseconds)
[info] - caching in memory, replicated (encryption = off) (4 seconds, 905 milliseconds)
[info] - sort with Java non serializable class - Kryo (5 seconds, 184 milliseconds)
[info] - Reverse Chain Connected Components (5 seconds, 923 milliseconds)
[info] - Connected Components on a Toy Connected Graph (907 milliseconds)
[info] VertexRDDSuite:
[info] - caching in memory, replicated (encryption = off) (with replication as stream) (5 seconds, 153 milliseconds)
[info] - filter (553 milliseconds)
[info] - sort with Java non serializable class - Java (4 seconds, 13 milliseconds)
[info] - mapValues (566 milliseconds)
[info] - minus (272 milliseconds)
[info] - shuffle with different compression settings (SPARK-3426) (612 milliseconds)
[info] - minus with RDD[(VertexId, VD)] (245 milliseconds)
[info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (453 milliseconds)
[info] - cannot find its local shuffle file if no execution of the stage and rerun shuffle (111 milliseconds)
[info] - minus with non-equal number of partitions (536 milliseconds)
[info] - metrics for shuffle without aggregation (381 milliseconds)
[info] - diff (489 milliseconds)
[info] - diff with RDD[(VertexId, VD)] (557 milliseconds)
[info] - metrics for shuffle with aggregation (1 second, 34 milliseconds)
[info] - multiple simultaneous attempts for one task (SPARK-8029) (88 milliseconds)
[info] - diff vertices with non-equal number of partitions (428 milliseconds)
[info] - leftJoin (637 milliseconds)
[info] - leftJoin vertices with non-equal number of partitions (447 milliseconds)
[info] - caching in memory, replicated (encryption = on) (4 seconds, 589 milliseconds)
[info] - innerJoin (833 milliseconds)
[info] - innerJoin vertices with the non-equal number of partitions (414 milliseconds)
[info] - aggregateUsingIndex (535 milliseconds)
[info] - mergeFunc (176 milliseconds)
[info] - cache, getStorageLevel (71 milliseconds)
[info] - checkpoint (918 milliseconds)
[info] - count (701 milliseconds)
[info] EdgePartitionSuite:
[info] - reverse (8 milliseconds)
[info] - map (3 milliseconds)
[info] - filter (6 milliseconds)
[info] - groupEdges (3 milliseconds)
[info] - innerJoin (3 milliseconds)
[info] - isActive, numActives, replaceActives (1 millisecond)
[info] - tripletIterator (2 milliseconds)
[info] - serialization (26 milliseconds)
[info] EdgeSuite:
[info] - compare (2 milliseconds)
[info] PageRankSuite:
[info] - using external shuffle service (5 seconds, 263 milliseconds)
[info] ConfigEntrySuite:
[info] - conf entry: int (1 millisecond)
[info] - conf entry: long (0 milliseconds)
[info] - conf entry: double (0 milliseconds)
[info] - conf entry: boolean (0 milliseconds)
[info] - conf entry: optional (0 milliseconds)
[info] - conf entry: fallback (1 millisecond)
[info] - conf entry: time (0 milliseconds)
[info] - conf entry: bytes (0 milliseconds)
[info] - conf entry: regex (1 millisecond)
[info] - conf entry: string seq (0 milliseconds)
[info] - conf entry: int seq (1 millisecond)
[info] - conf entry: transformation (1 millisecond)
[info] - conf entry: checkValue() (2 milliseconds)
[info] - conf entry: valid values check (1 millisecond)
[info] - conf entry: conversion error (1 millisecond)
[info] - default value handling is null-safe (1 millisecond)
[info] - variable expansion of spark config entries (8 milliseconds)
[info] - conf entry : default function (1 millisecond)
[info] - conf entry: alternative keys (1 millisecond)
[info] - onCreate (2 milliseconds)
[info] InputOutputMetricsSuite:
[info] - input metrics for old hadoop with coalesce (224 milliseconds)
[info] - input metrics with cache and coalesce (152 milliseconds)
[info] - input metrics for new Hadoop API with coalesce (100 milliseconds)
[info] - input metrics when reading text file (56 milliseconds)
[info] - input metrics on records read - simple (57 milliseconds)
[info] - input metrics on records read - more stages (190 milliseconds)
[info] - input metrics on records - New Hadoop API (35 milliseconds)
[info] - Star PageRank (1 second, 843 milliseconds)
[info] - input metrics on records read with cache (111 milliseconds)
[info] - input read/write and shuffle read/write metrics all line up (158 milliseconds)
[info] - caching in memory, replicated (encryption = on) (with replication as stream) (5 seconds, 407 milliseconds)
[info] - input metrics with interleaved reads (341 milliseconds)
[info] - output metrics on records written (115 milliseconds)
[info] - output metrics on records written - new Hadoop API (78 milliseconds)
[info] - output metrics when writing text file (102 milliseconds)
[info] - input metrics with old CombineFileInputFormat (34 milliseconds)
[info] - input metrics with new CombineFileInputFormat (59 milliseconds)
[info] - input metrics with old Hadoop API in different thread (73 milliseconds)
[info] - input metrics with new Hadoop API in different thread (81 milliseconds)
[info] AppStatusStoreSuite:
[info] - quantile calculation: 1 task (28 milliseconds)
[info] - quantile calculation: few tasks (5 milliseconds)
[info] - quantile calculation: more tasks (18 milliseconds)
[info] - quantile calculation: lots of tasks (95 milliseconds)
[info] - quantile calculation: custom quantiles (49 milliseconds)
[info] - quantile cache (126 milliseconds)
[info] - SPARK-28638: only successful tasks have taskSummary when with in memory kvstore (3 milliseconds)
[info] - SPARK-28638: summary should contain successful tasks only when with in memory kvstore (12 milliseconds)
[info] CountEvaluatorSuite:
[info] - test count 0 (2 milliseconds)
[info] - test count >= 1 (29 milliseconds)
[info] TaskResultGetterSuite:
[info] - handling results smaller than max RPC message size (61 milliseconds)
[info] - handling results larger than max RPC message size (326 milliseconds)
[info] - handling total size of results larger than maxResultSize (103 milliseconds)
[info] - task retried if result missing from block manager (408 milliseconds)
[info] - failed task deserialized with the correct classloader (SPARK-11195) (442 milliseconds)
[info] - task result size is set on the driver, not the executors (88 milliseconds)
[info] - failed task is handled when error occurs deserializing the reason (81 milliseconds)
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:304)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:193)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply$mcV$sp(TaskResultGetter.scala:142)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4.run(TaskResultGetter.scala:138)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] SerializerPropertiesSuite:
[info] - JavaSerializer does not support relocation (2 milliseconds)
[info] - Star PersonalPageRank (3 seconds, 377 milliseconds)
[info] - KryoSerializer supports relocation when auto-reset is enabled (97 milliseconds)
[info] - KryoSerializer does not support relocation when auto-reset is disabled (12 milliseconds)
[info] DriverRunnerTest:
[info] - Process succeeds instantly (65 milliseconds)
[info] - Process failing several times and then succeeding (32 milliseconds)
[info] - Process doesn't restart if not supervised (28 milliseconds)
[info] - Process doesn't restart if killed (48 milliseconds)
[info] - Reset of backoff counter (34 milliseconds)
[info] - Kill process finalized with state KILLED (37 milliseconds)
[info] - Finalized with state FINISHED (36 milliseconds)
[info] - Finalized with state FAILED (36 milliseconds)
[info] - Handle exception starting process (43 milliseconds)
[info] MapOutputTrackerSuite:
[info] - master start and stop (93 milliseconds)
[info] - master register shuffle and fetch (83 milliseconds)
[info] - master register and unregister shuffle (80 milliseconds)
[info] - master register shuffle and unregister map output and fetch (97 milliseconds)
[info] - remote fetch (218 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (4 seconds, 88 milliseconds)
[info] - remote fetch below max RPC message size (98 milliseconds)
[info] - min broadcast size exceeds max RPC message size (25 milliseconds)
[info] - getLocationsWithLargestOutputs with multiple outputs in same machine (63 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (with replication as stream) (4 seconds, 300 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (4 seconds, 320 milliseconds)
[info] - Grid PageRank (11 seconds, 9 milliseconds)
[info] - Chain PageRank (3 seconds, 228 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (with replication as stream) (4 seconds, 552 milliseconds)
[info] - remote fetch using broadcast (16 seconds, 534 milliseconds)
[info] - equally divide map statistics tasks (39 milliseconds)
[info] - Chain PersonalizedPageRank (3 seconds, 560 milliseconds)
[info] - zero-sized blocks should be excluded when getMapSizesByExecutorId (133 milliseconds)
[info] - caching on disk, replicated (encryption = off) (4 seconds, 127 milliseconds)
[info] - caching on disk, replicated (encryption = off) (with replication as stream) (3 seconds, 866 milliseconds)
[info] - caching on disk, replicated (encryption = on) (4 seconds, 786 milliseconds)
[info] - caching on disk, replicated (encryption = on) (with replication as stream) (4 seconds, 498 milliseconds)
[info] - Loop with source PageRank (14 seconds, 947 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (4 seconds, 130 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (with replication as stream) (4 seconds, 76 milliseconds)
[info] - caching in memory and disk, replicated (encryption = on) (4 seconds, 194 milliseconds)
[info] - Loop with sink PageRank (13 seconds, 169 milliseconds)
[info] EdgeRDDSuite:
[info] - cache, getStorageLevel (71 milliseconds)
[info] - checkpointing (269 milliseconds)
[info] - count (104 milliseconds)
[info] GraphSuite:
[info] - Graph.fromEdgeTuples (235 milliseconds)
[info] - Graph.fromEdges (112 milliseconds)
[info] - SPARK-34939: remote fetch using broadcast if broadcasted value is destroyed (28 seconds, 923 milliseconds)
[info] PythonBroadcastSuite:
[info] - PythonBroadcast can be serialized with Kryo (SPARK-4882) (19 milliseconds)
[info] ExecutorRunnerTest:
[info] - Graph.apply (306 milliseconds)
[info] - command includes appId (27 milliseconds)
[info] CompressionCodecSuite:
[info] - default compression codec (5 milliseconds)
[info] - lz4 compression codec (1 millisecond)
[info] - lz4 compression codec short form (1 millisecond)
[info] - lz4 supports concatenation of serialized streams (2 milliseconds)
[info] - lzf compression codec (10 milliseconds)
[info] - lzf compression codec short form (3 milliseconds)
[info] - lzf supports concatenation of serialized streams (1 millisecond)
[info] - snappy compression codec (27 milliseconds)
[info] - snappy compression codec short form (2 milliseconds)
[info] - snappy supports concatenation of serialized streams (1 millisecond)
[info] - zstd compression codec (26 milliseconds)
[info] - zstd compression codec short form (1 millisecond)
[info] - zstd supports concatenation of serialized streams (1 millisecond)
[info] - bad compression codec (2 milliseconds)
[info] MetricsSystemSuite:
[info] - MetricsSystem with default config (2 milliseconds)
[info] - MetricsSystem with sources add (6 milliseconds)
[info] - MetricsSystem with Driver instance (1 millisecond)
[info] - MetricsSystem with Driver instance and spark.app.id is not set (2 milliseconds)
[info] - MetricsSystem with Driver instance and spark.executor.id is not set (3 milliseconds)
[info] - MetricsSystem with Executor instance (1 millisecond)
[info] - MetricsSystem with Executor instance and spark.app.id is not set (1 millisecond)
[info] - triplets (338 milliseconds)
[info] - MetricsSystem with Executor instance and spark.executor.id is not set (1 millisecond)
[info] - MetricsSystem with instance which is neither Driver nor Executor (1 millisecond)
[info] - MetricsSystem with Executor instance, with custom namespace (1 millisecond)
[info] - MetricsSystem with Executor instance, custom namespace which is not set (1 millisecond)
[info] - MetricsSystem with Executor instance, custom namespace, spark.executor.id not set (2 milliseconds)
[info] - MetricsSystem with non-driver, non-executor instance with custom namespace (2 milliseconds)
[info] ConfigReaderSuite:
[info] - variable expansion (3 milliseconds)
[info] - circular references (1 millisecond)
[info] - spark conf provider filters config keys (0 milliseconds)
[info] DoubleRDDSuite:
[info] - sum (62 milliseconds)
[info] - WorksOnEmpty (44 milliseconds)
[info] - WorksWithOutOfRangeWithOneBucket (35 milliseconds)
[info] - WorksInRangeWithOneBucket (38 milliseconds)
[info] - WorksInRangeWithOneBucketExactMatch (37 milliseconds)
[info] - WorksWithOutOfRangeWithTwoBuckets (33 milliseconds)
[info] - WorksWithOutOfRangeWithTwoUnEvenBuckets (18 milliseconds)
[info] - WorksInRangeWithTwoBuckets (38 milliseconds)
[info] - WorksInRangeWithTwoBucketsAndNaN (126 milliseconds)
[info] - WorksInRangeWithTwoUnevenBuckets (19 milliseconds)
[info] - WorksMixedRangeWithTwoUnevenBuckets (18 milliseconds)
[info] - WorksMixedRangeWithFourUnevenBuckets (16 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaN (15 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange (16 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity (14 milliseconds)
[info] - WorksWithOutOfRangeWithInfiniteBuckets (14 milliseconds)
[info] - ThrowsExceptionOnInvalidBucketArray (2 milliseconds)
[info] - WorksWithoutBucketsBasic (31 milliseconds)
[info] - WorksWithoutBucketsBasicSingleElement (24 milliseconds)
[info] - WorksWithoutBucketsBasicNoRange (23 milliseconds)
[info] - WorksWithoutBucketsBasicTwo (27 milliseconds)
[info] - WorksWithDoubleValuesAtMinMax (58 milliseconds)
[info] - WorksWithoutBucketsWithMoreRequestedThanElements (29 milliseconds)
[info] - WorksWithoutBucketsForLargerDatasets (33 milliseconds)
[info] - WorksWithoutBucketsWithNonIntegralBucketEdges (33 milliseconds)
[info] - WorksWithHugeRange (511 milliseconds)
[info] - ThrowsExceptionOnInvalidRDDs (39 milliseconds)
[info] NextIteratorSuite:
[info] - one iteration (3 milliseconds)
[info] - two iterations (1 millisecond)
[info] - empty iteration (1 millisecond)
[info] - close is called once for empty iterations (0 milliseconds)
[info] - close is called once for non-empty iterations (1 millisecond)
[info] SparkSubmitSuite:
[info] - prints usage on empty input (23 milliseconds)
[info] - prints usage with only --help (2 milliseconds)
[info] - prints error with unrecognized options (2 milliseconds)
[info] - caching in memory and disk, replicated (encryption = on) (with replication as stream) (4 seconds, 536 milliseconds)
[info] - handle binary specified but not class (158 milliseconds)
[info] - handles arguments with --key=val (3 milliseconds)
[info] - handles arguments to user program (1 millisecond)
[info] - handles arguments to user program with name collision (1 millisecond)
[info] - print the right queue name (8 milliseconds)
[info] - SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled (2 milliseconds)
[info] - specify deploy mode through configuration (286 milliseconds)
[info] - handles YARN cluster mode (22 milliseconds)
[info] - handles YARN client mode (35 milliseconds)
[info] - handles standalone cluster mode (14 milliseconds)
[info] - handles legacy standalone cluster mode (13 milliseconds)
[info] - handles standalone client mode (35 milliseconds)
[info] - handles mesos client mode (34 milliseconds)
[info] - handles k8s cluster mode (16 milliseconds)
[info] - handles confs with flag equivalents (15 milliseconds)
[info] - SPARK-21568 ConsoleProgressBar should be enabled only in shells (69 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (3 seconds, 829 milliseconds)
[info] - launch simple application with spark-submit (5 seconds, 141 milliseconds)
[info] - partitionBy (9 seconds, 215 milliseconds)
[info] - mapVertices (354 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream) (4 seconds, 40 milliseconds)
[info] - mapVertices changing type with same erased type (314 milliseconds)
[info] - mapEdges (247 milliseconds)
[info] - mapTriplets (408 milliseconds)
[info] - reverse (289 milliseconds)
[info] - reverse with join elimination (252 milliseconds)
[info] - subgraph (366 milliseconds)
[info] - mask (242 milliseconds)
[info] - groupEdges (407 milliseconds)
[info] - aggregateMessages (477 milliseconds)
[info] - outerJoinVertices (642 milliseconds)
[info] - launch simple application with spark-submit with redaction (5 seconds, 690 milliseconds)
[info] - more edge partitions than vertex partitions (256 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (4 seconds, 39 milliseconds)
[info] - checkpoint (372 milliseconds)
[info] - cache, getStorageLevel (88 milliseconds)
[info] - non-default number of edge partitions (300 milliseconds)
[info] - unpersist graph RDD (527 milliseconds)
[info] - SPARK-14219: pickRandomVertex (204 milliseconds)
[info] ShortestPathsSuite:
[info] - basic usage (2 minutes, 5 seconds)
[info] - Shortest Path Computations (766 milliseconds)
[info] GraphOpsSuite:
[info] - joinVertices (268 milliseconds)
[info] - collectNeighborIds (464 milliseconds)
[info] - removeSelfEdges (218 milliseconds)
[info] - filter (312 milliseconds)
[info] - convertToCanonicalEdges (240 milliseconds)
[info] - collectEdgesCycleDirectionOut (452 milliseconds)
[info] - collectEdgesCycleDirectionIn (495 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream) (4 seconds, 653 milliseconds)
[info] - collectEdgesCycleDirectionEither (468 milliseconds)
[info] - collectEdgesChainDirectionOut (480 milliseconds)
[info] - collectEdgesChainDirectionIn (443 milliseconds)
[info] - collectEdgesChainDirectionEither (431 milliseconds)
[info] StronglyConnectedComponentsSuite:
[info] - Island Strongly Connected Components (929 milliseconds)
[info] - compute without caching when no partitions fit in memory (4 seconds, 109 milliseconds)
[info] - Cycle Strongly Connected Components (3 seconds, 147 milliseconds)
[info] - includes jars passed in through --jars (10 seconds, 822 milliseconds)
[info] - 2 Cycle Strongly Connected Components (2 seconds, 259 milliseconds)
[info] VertexPartitionSuite:
[info] - isDefined, filter (6 milliseconds)
[info] - map (1 millisecond)
[info] - diff (1 millisecond)
[info] - leftJoin (4 milliseconds)
[info] - innerJoin (3 milliseconds)
[info] - createUsingIndex (1 millisecond)
[info] - innerJoinKeepLeft (1 millisecond)
[info] - aggregateUsingIndex (0 milliseconds)
[info] - reindex (2 milliseconds)
[info] - serialization (15 milliseconds)
[info] GraphLoaderSuite:
[info] - GraphLoader.edgeListFile (465 milliseconds)
[info] TriangleCountSuite:
[info] - compute when only some partitions fit in memory (4 seconds, 621 milliseconds)
[info] - Count a single triangle (748 milliseconds)
[info] - Count two triangles (642 milliseconds)
[info] - Count two triangles with bi-directed edges (589 milliseconds)
[info] - Count a single triangle with duplicate edges (601 milliseconds)
[info] GraphGeneratorsSuite:
[info] - GraphGenerators.generateRandomEdges (3 milliseconds)
[info] - GraphGenerators.sampleLogNormal (7 milliseconds)
[info] - GraphGenerators.logNormalGraph (380 milliseconds)
[info] - SPARK-5064 GraphGenerators.rmatGraph numEdges upper bound (178 milliseconds)
[info] SVDPlusPlusSuite:
[info] - passing environment variables to cluster (3 seconds, 596 milliseconds)
[info] - Test SVD++ with mean square error on training set (1 second, 195 milliseconds)
[info] - Test SVD++ with no edges (260 milliseconds)
[info] ReliableKafkaStreamSuite:
[info] - includes jars passed in through --packages (12 seconds, 141 milliseconds)
[info] - Reliable Kafka input stream with single topic (2 seconds, 264 milliseconds)
[info] - recover from node failures (7 seconds, 189 milliseconds)
[info] - Reliable Kafka input stream with multiple topics (807 milliseconds)
[info] KafkaStreamSuite:
[info] - Kafka input stream (1 second, 408 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - basic stream receiving with multiple topics and smallest starting offset (967 milliseconds)
[info] - receiving from largest starting offset (296 milliseconds)
[info] - creating stream by offset (298 milliseconds)
[info] - offset recovery (3 seconds, 78 milliseconds)
[info] - includes jars passed through spark.jars.packages and spark.jars.repositories (11 seconds, 156 milliseconds)
[info] - correctly builds R packages included in a jar with --packages !!! IGNORED !!!
[info] - Direct Kafka stream report input information (465 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (110 milliseconds)
[info] - maxMessagesPerPartition with no lag (113 milliseconds)
[info] - maxMessagesPerPartition respects max rate (117 milliseconds)
[info] - recover from repeated node failures during shuffle-map (9 seconds, 823 milliseconds)
[info] - using rate controller (713 milliseconds)
[info] - use backpressure.initialRate with backpressure (275 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (245 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to one (93 milliseconds)
[info] KafkaRDDSuite:
[info] - basic usage (217 milliseconds)
[info] - iterator boundary conditions (215 milliseconds)
[info] KafkaClusterSuite:
[info] - metadata apis (11 milliseconds)
[info] - leader offset apis (4 milliseconds)
[info] - consumer offset apis (7 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.763s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaRDDSuite.testKafkaRDD started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.045s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.162s
[info] MesosSchedulerUtilsSuite:
[info] - use at-least minimum overhead (248 milliseconds)
[info] - use overhead if it is greater than minimum value (3 milliseconds)
[info] - use spark.mesos.executor.memoryOverhead (if set) (3 milliseconds)
[info] - parse a non-empty constraint string correctly (20 milliseconds)
[info] - parse an empty constraint string correctly (0 milliseconds)
[info] - throw an exception when the input is malformed (6 milliseconds)
[info] - empty values for attributes' constraints match all values (27 milliseconds)
[info] - subset match is performed for set attributes (5 milliseconds)
[info] - less than equal match is performed on scalar attributes (4 milliseconds)
[info] - contains match is performed for range attributes (34 milliseconds)
[info] - equality match is performed for text attributes (1 millisecond)
[info] - Port reservation is done correctly with user specified ports only (30 milliseconds)
[info] - Port reservation is done correctly with all random ports (3 milliseconds)
[info] - Port reservation is done correctly with user specified ports only - multiple ranges (2 milliseconds)
[info] - Port reservation is done correctly with all random ports - multiple ranges (2 milliseconds)
[info] - Principal specified via spark.mesos.principal (16 milliseconds)
[info] - Principal specified via spark.mesos.principal.file (15 milliseconds)
[info] - Principal specified via spark.mesos.principal.file that does not exist (2 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL (1 millisecond)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE (1 millisecond)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE that does not exist (2 milliseconds)
[info] - Secret specified via spark.mesos.secret (1 millisecond)
[info] - Secret specified via spark.mesos.secret.file (1 millisecond)
[info] - Secret specified via spark.mesos.secret.file that does not exist (1 millisecond)
[info] - Secret specified via SPARK_MESOS_SECRET (1 millisecond)
[info] - Secret specified via SPARK_MESOS_SECRET_FILE (1 millisecond)
[info] - Secret specified with no principal (1 millisecond)
[info] - Principal specification preference (1 millisecond)
[info] - Secret specification preference (1 millisecond)
[info] MesosSchedulerBackendUtilSuite:
[info] - ContainerInfo fails to parse invalid docker parameters (70 milliseconds)
[info] - ContainerInfo parses docker parameters (1 millisecond)
[info] - SPARK-28778 ContainerInfo respects Docker network configuration (19 milliseconds)
[info] MesosFineGrainedSchedulerBackendSuite:
[info] - weburi is set in created scheduler driver (41 milliseconds)
[info] - Use configured mesosExecutor.cores for ExecutorInfo (43 milliseconds)
[info] - check spark-class location correctly (8 milliseconds)
[info] - spark docker properties correctly populate the DockerInfo message (13 milliseconds)
[info] - mesos resource offers result in launching tasks (57 milliseconds)
[info] - can handle multiple roles (6 milliseconds)
[info] MesosCoarseGrainedSchedulerBackendSuite:
[info] - include an external JAR in SparkR (10 seconds, 791 milliseconds)
[info] - resolves command line argument paths correctly (138 milliseconds)
[info] - ambiguous archive mapping results in error message (19 milliseconds)
[info] - mesos supports killing and limiting executors (1 second, 409 milliseconds)
[info] - resolves config paths correctly (135 milliseconds)
[info] - mesos supports killing and relaunching tasks with executors (129 milliseconds)
[info] - mesos supports spark.executor.cores (116 milliseconds)
[info] - mesos supports unset spark.executor.cores (114 milliseconds)
[info] - mesos does not acquire more than spark.cores.max (106 milliseconds)
[info] - mesos does not acquire gpus if not specified (83 milliseconds)
[info] - mesos does not acquire more than spark.mesos.gpus.max (77 milliseconds)
[info] - mesos declines offers that violate attribute constraints (116 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max (89 milliseconds)
[info] - mesos declines offers with a filter when maxCores not a multiple of executor.cores (99 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max with executor.cores (86 milliseconds)
[info] - mesos assigns tasks round-robin on offers (73 milliseconds)
[info] - mesos creates multiple executors on a single slave (86 milliseconds)
[info] - mesos doesn't register twice with the same shuffle service (81 milliseconds)
[info] - Port offer decline when there is no appropriate range (101 milliseconds)
[info] - Port offer accepted when ephemeral ports are used (112 milliseconds)
[info] - Port offer accepted with user defined port numbers (87 milliseconds)
[info] - mesos kills an executor when told (98 milliseconds)
[info] - weburi is set in created scheduler driver (72 milliseconds)
[info] - failover timeout is set in created scheduler driver (73 milliseconds)
[info] - honors unset spark.mesos.containerizer (86 milliseconds)
[info] - user classpath first in driver (2 seconds, 591 milliseconds)
[info] - SPARK_CONF_DIR overrides spark-defaults.conf (7 milliseconds)
[info] - support glob path (42 milliseconds)
[info] - honors spark.mesos.containerizer="mesos" (91 milliseconds)
[info] - SPARK-27575: yarn confs should merge new value with existing value (79 milliseconds)
[info] - downloadFile - invalid url (42 milliseconds)
[info] - downloadFile - file doesn't exist (52 milliseconds)
[info] - docker settings are reflected in created tasks (93 milliseconds)
[info] - downloadFile does not download local file (24 milliseconds)
[info] - download one file to local (32 milliseconds)
[info] - download list of files to local (32 milliseconds)
[info] - force-pull-image option is disabled by default (67 milliseconds)
[info] - remove copies of application jar from classpath (61 milliseconds)
[info] - Avoid re-upload remote resources in yarn client mode (44 milliseconds)
[info] - mesos supports spark.executor.uri (77 milliseconds)
[info] - download remote resource if it is not supported by yarn service (54 milliseconds)
[info] - avoid downloading remote resource if it is supported by yarn service (40 milliseconds)
[info] - force download from blacklisted schemes (39 milliseconds)
[info] - mesos supports setting fetcher cache (91 milliseconds)
[info] - force download for all the schemes (40 milliseconds)
[info] - start SparkApplication without modifying system properties (38 milliseconds)
[info] - mesos supports disabling fetcher cache (64 milliseconds)
[info] - support --py-files/spark.submit.pyFiles in non pyspark application (66 milliseconds)
[info] - handles natural line delimiters in --properties-file and --conf uniformly (38 milliseconds)
[info] - mesos sets task name to spark.app.name (60 milliseconds)
[info] NettyRpcEnvSuite:
[info] - send a message locally (2 milliseconds)
[info] - send a message remotely (46 milliseconds)
[info] - send a RpcEndpointRef (1 millisecond)
[info] - ask a message locally (1 millisecond)
[info] - mesos sets configurable labels on tasks (71 milliseconds)
[info] - ask a message remotely (48 milliseconds)
[info] - ask a message timeout (42 milliseconds)
[info] - onStart and onStop (1 millisecond)
[info] - onError: error in onStart (2 milliseconds)
[info] - onError: error in onStop (1 millisecond)
[info] - onError: error in receive (1 millisecond)
[info] - self: call in onStart (1 millisecond)
[info] - self: call in receive (1 millisecond)
[info] - self: call in onStop (2 milliseconds)
[info] - mesos supports spark.mesos.network.name and spark.mesos.network.labels (115 milliseconds)
[info] - call receive in sequence (382 milliseconds)
[info] - stop(RpcEndpointRef) reentrant (1 millisecond)
[info] - sendWithReply (1 millisecond)
[info] - sendWithReply: remotely (44 milliseconds)
[info] - sendWithReply: error (2 milliseconds)
[info] - SPARK-28778 '--hostname' shouldn't be set for executor when virtual network is enabled (364 milliseconds)
[info] - sendWithReply: remotely error (69 milliseconds)
[info] - supports spark.scheduler.minRegisteredResourcesRatio (127 milliseconds)
[info] - network events in server RpcEnv when another RpcEnv is in server mode (115 milliseconds)
[info] - network events in server RpcEnv when another RpcEnv is in client mode (77 milliseconds)
[info] - network events in client RpcEnv when another RpcEnv is in server mode (123 milliseconds)
[info] - sendWithReply: unserializable error (52 milliseconds)
[info] - port conflict (47 milliseconds)
[info] - send with authentication (154 milliseconds)
[info] - send with SASL encryption (134 milliseconds)
[info] - send with AES encryption (203 milliseconds)
[info] - ask with authentication (90 milliseconds)
[info] - ask with SASL encryption (166 milliseconds)
[info] - ask with AES encryption (95 milliseconds)
[info] - construct RpcTimeout with conf property (1 millisecond)
[info] - ask a message timeout on Future using RpcTimeout (24 milliseconds)
[info] - file server (75 milliseconds)
[info] - SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events (42 milliseconds)
[info] - non-existent endpoint (1 millisecond)
[info] - advertise address different from bind address (47 milliseconds)
[info] - RequestMessage serialization (7 milliseconds)
Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - StackOverflowError should be sent back and Dispatcher should survive (41 milliseconds)
[info] JsonProtocolSuite:
[info] - SparkListenerEvent (245 milliseconds)
[info] - Dependent Classes (19 milliseconds)
[info] - ExceptionFailure backward compatibility: full stack trace (2 milliseconds)
[info] - StageInfo backward compatibility (details, accumulables) (2 milliseconds)
[info] - InputMetrics backward compatibility (1 millisecond)
[info] - Input/Output records backwards compatibility (1 millisecond)
[info] - Shuffle Read/Write records backwards compatibility (2 milliseconds)
[info] - OutputMetrics backward compatibility (1 millisecond)
[info] - BlockManager events backward compatibility (1 millisecond)
[info] - FetchFailed backwards compatibility (0 milliseconds)
[info] - ShuffleReadMetrics: Local bytes read backwards compatibility (1 millisecond)
[info] - SparkListenerApplicationStart backwards compatibility (1 millisecond)
[info] - ExecutorLostFailure backward compatibility (1 millisecond)
[info] - SparkListenerJobStart backward compatibility (3 milliseconds)
[info] - SparkListenerJobStart and SparkListenerJobEnd backward compatibility (3 milliseconds)
[info] - RDDInfo backward compatibility (scope, parent IDs, callsite) (2 milliseconds)
[info] - StageInfo backward compatibility (parent IDs) (1 millisecond)
[info] - TaskCommitDenied backward compatibility (1 millisecond)
[info] - AccumulableInfo backward compatibility (1 millisecond)
[info] - ExceptionFailure backward compatibility: accumulator updates (9 milliseconds)
[info] - AccumulableInfo value de/serialization (2 milliseconds)
[info] - SPARK-31923: unexpected value type of internal accumulator (2 milliseconds)
[info] RPackageUtilsSuite:
[info] - pick which jars to unpack using the manifest (378 milliseconds)
[info] - build an R package from a jar end to end (2 seconds, 961 milliseconds)
[info] - jars that don't exist are skipped and print warning (372 milliseconds)
[info] - faulty R package shows documentation (367 milliseconds)
[info] - jars without manifest return false (121 milliseconds)
[info] - SparkR zipping works properly (14 milliseconds)
[info] TopologyMapperSuite:
[info] - File based Topology Mapper (8 milliseconds)
[info] EventLoggingListenerSuite:
[info] - Verify log file exist (33 milliseconds)
[info] - supports data locality with dynamic allocation (6 seconds, 84 milliseconds)
[info] - Basic event logging (88 milliseconds)
[info] - Creates env-based reference secrets. (90 milliseconds)
[info] - Creates env-based value secrets. (66 milliseconds)
[info] - Basic event logging with compression (193 milliseconds)
[info] - Creates file-based reference secrets. (103 milliseconds)
[info] - Creates file-based value secrets. (68 milliseconds)
[info] MesosClusterSchedulerSuite:
[info] - can queue drivers (33 milliseconds)
[info] - can kill queued drivers (25 milliseconds)
[info] - can handle multiple roles (52 milliseconds)
[info] - escapes commandline args for the shell (47 milliseconds)
[info] - supports spark.mesos.driverEnv.* (27 milliseconds)
[info] - supports spark.mesos.network.name and spark.mesos.network.labels (26 milliseconds)
[info] - supports setting fetcher cache on the dispatcher (26 milliseconds)
[info] - supports setting fetcher cache in the submission (26 milliseconds)
[info] - supports disabling fetcher cache (25 milliseconds)
[info] - accept/decline offers with driver constraints (38 milliseconds)
[info] - supports spark.mesos.driver.labels (28 milliseconds)
[info] - can kill supervised drivers (40 milliseconds)
[info] - SPARK-27347: do not restart outdated supervised drivers (1 second, 552 milliseconds)
[info] - Declines offer with refuse seconds = 120. (24 milliseconds)
[info] - Creates env-based reference secrets. (50 milliseconds)
[info] - Creates env-based value secrets. (34 milliseconds)
[info] - Creates file-based reference secrets. (28 milliseconds)
[info] - Creates file-based value secrets. (22 milliseconds)
[info] MesosClusterDispatcherSuite:
[info] - prints usage on empty input (11 milliseconds)
[info] - prints usage with only --help (1 millisecond)
[info] - prints error with unrecognized options (1 millisecond)
[info] MesosClusterManagerSuite:
[info] - mesos fine-grained (57 milliseconds)
[info] - mesos coarse-grained (66 milliseconds)
[info] - mesos with zookeeper (67 milliseconds)
[info] - mesos with i/o encryption throws error (113 milliseconds)
[info] MesosClusterDispatcherArgumentsSuite:
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.mesos.key2,value2)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test if spark config args are passed successfully (11 milliseconds)
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test non conf settings (2 milliseconds)
[info] MesosProtoUtilsSuite:
[info] - mesosLabels (1 millisecond)
[info] - recover from repeated node failures during shuffle-reduce (23 seconds, 966 milliseconds)
[info] - End-to-end event logging (3 seconds, 702 milliseconds)
[info] ExecutorPodsSnapshotSuite:
[info] - States are interpreted correctly from pod metadata. (214 milliseconds)
[info] - Updates add new pods for non-matching ids and edit existing pods for matching ids (6 milliseconds)
[info] EnvSecretsFeatureStepSuite:
[info] - sets up all keyRefs (23 milliseconds)
[info] RDriverFeatureStepSuite:
[info] - R Step modifies container correctly (85 milliseconds)
[info] ExecutorPodsPollingSnapshotSourceSuite:
[info] - Items returned by the API should be pushed to the event queue (16 milliseconds)
[info] BasicExecutorFeatureStepSuite:
[info] - basic executor pod has reasonable defaults (34 milliseconds)
[info] - executor pod hostnames get truncated to 63 characters (3 milliseconds)
[info] - classpath and extra java options get translated into environment variables (5 milliseconds)
[info] - test executor pyspark memory (6 milliseconds)
[info] DriverKubernetesCredentialsFeatureStepSuite:
[info] - Don't set any credentials (12 milliseconds)
[info] - Only set credentials that are manually mounted. (3 milliseconds)
[info] - Mount credentials from the submission client as a secret. (76 milliseconds)
[info] ClientSuite:
[info] - The client should configure the pod using the builder. (16 milliseconds)
[info] - The client should create Kubernetes resources (6 milliseconds)
[info] - Waiting for app completion should stall on the watcher (4 milliseconds)
[info] DriverServiceFeatureStepSuite:
[info] - Headless service has a port for the driver RPC and the block manager. (24 milliseconds)
[info] - Hostname and ports are set according to the service name. (1 millisecond)
[info] - Ports should resolve to defaults in SparkConf and in the service. (1 millisecond)
[info] - Long prefixes should switch to using a generated name. (3 milliseconds)
[info] - Disallow bind address and driver host to be set explicitly. (1 millisecond)
[info] KubernetesDriverBuilderSuite:
[info] - Apply fundamental steps all the time. (9 milliseconds)
[info] - Apply secrets step if secrets are present. (5 milliseconds)
[info] - Apply Java step if main resource is none. (4 milliseconds)
[info] - Apply Python step if main resource is python. (5 milliseconds)
[info] - Apply volumes step if mounts are present. (6 milliseconds)
[info] - Apply R step if main resource is R. (3 milliseconds)
[info] ExecutorPodsAllocatorSuite:
[info] - Initially request executors in batches. Do not request another batch if the first has not finished. (23 milliseconds)
[info] - Request executors in batches. Allow another batch to be requested if all pending executors start running. (15 milliseconds)
[info] - When a current batch reaches error states immediately, re-request them on the next batch. (14 milliseconds)
[info] - When an executor is requested but the API does not report it in a reasonable time, retry requesting that executor. (6 milliseconds)
[info] KubernetesClusterSchedulerBackendSuite:
[info] - Start all components (5 milliseconds)
[info] - Stop all components (10 milliseconds)
[info] - Remove executor (2 milliseconds)
[info] - Kill executors (8 milliseconds)
[info] - Request total executors (2 milliseconds)
[info] - SPARK-34407: CoarseGrainedSchedulerBackend.stop may throw SparkException (9 milliseconds)
[info] KubernetesConfSuite:
[info] - Basic driver translated fields. (7 milliseconds)
[info] - Creating driver conf with and without the main app jar influences spark.jars (6 milliseconds)
[info] - Creating driver conf with a python primary file (3 milliseconds)
[info] - Creating driver conf with an R primary file (2 milliseconds)
[info] - Testing explicit setting of memory overhead on non-JVM tasks (2 milliseconds)
[info] - Resolve driver labels, annotations, secret mount paths, envs, and memory overhead (6 milliseconds)
[info] - Basic executor translated fields. (1 millisecond)
[info] - Image pull secrets. (2 milliseconds)
[info] - Set executor labels, annotations, and secrets (4 milliseconds)
[info] KubernetesVolumeUtilsSuite:
[info] - Parses hostPath volumes correctly (9 milliseconds)
[info] - Parses persistentVolumeClaim volumes correctly (2 milliseconds)
[info] - Parses emptyDir volumes correctly (2 milliseconds)
[info] - Parses emptyDir volume options can be optional (1 millisecond)
[info] - Defaults optional readOnly to false (1 millisecond)
[info] - Gracefully fails on missing mount key (1 millisecond)
[info] - Gracefully fails on missing option key (1 millisecond)
[info] BasicDriverFeatureStepSuite:
[info] - Check the pod respects all configurations from the user. (10 milliseconds)
[info] - Check appropriate entrypoint rerouting for various bindings (3 milliseconds)
[info] - Additional system properties resolve jars and set cluster-mode confs. (2 milliseconds)
[info] ExecutorPodsSnapshotsStoreSuite:
[info] - Subscribers get notified of events periodically. (7 milliseconds)
[info] - Even without sending events, initially receive an empty buffer. (1 millisecond)
[info] - Replacing the snapshot passes the new snapshot to subscribers. (2 milliseconds)
[info] MountVolumesFeatureStepSuite:
[info] - Mounts hostPath volumes (5 milliseconds)
[info] - Mounts persistentVolumeClaims (3 milliseconds)
[info] - Mounts emptyDir (4 milliseconds)
[info] - Mounts emptyDir with no options (1 millisecond)
[info] - Mounts multiple volumes (1 millisecond)
[info] MountSecretsFeatureStepSuite:
[info] - mounts all given secrets (6 milliseconds)
[info] ExecutorPodsLifecycleManagerSuite:
[info] - When an executor reaches error states immediately, remove from the scheduler backend. (12 milliseconds)
[info] - Don't remove executors twice from Spark but remove from K8s repeatedly. (4 milliseconds)
[info] - When the scheduler backend lists executor ids that aren't present in the cluster, remove those executors from Spark. (2 milliseconds)
[info] JavaDriverFeatureStepSuite:
[info] - Java Step modifies container correctly (2 milliseconds)
[info] ExecutorPodsWatchSnapshotSourceSuite:
[info] - Watch events should be pushed to the snapshots store as snapshot updates. (3 milliseconds)
[info] LocalDirsFeatureStepSuite:
[info] - Resolve to default local dir if neither env nor configuration are set (34 milliseconds)
[info] - Use configured local dirs split on comma if provided. (2 milliseconds)
[info] PythonDriverFeatureStepSuite:
[info] - Python Step modifies container correctly (4 milliseconds)
[info] - Python Step testing empty pyfiles (2 milliseconds)
[info] KubernetesExecutorBuilderSuite:
[info] - Basic steps are consistently applied. (3 milliseconds)
[info] - Apply secrets step if secrets are present. (1 millisecond)
[info] - Apply volumes step if mounts are present. (1 millisecond)
[info] FailureTrackerSuite:
[info] - failures expire if validity interval is set (301 milliseconds)
[info] - failures never expire if validity interval is not set (-1) (6 milliseconds)
[info] ClientSuite:
[info] - default Yarn application classpath (42 milliseconds)
[info] - default MR application classpath (1 millisecond)
[info] - resultant classpath for an application that defines a classpath for YARN (389 milliseconds)
[info] - resultant classpath for an application that defines a classpath for MR (35 milliseconds)
[info] - resultant classpath for an application that defines both classpaths, YARN and MR (40 milliseconds)
[info] - Local jar URIs (388 milliseconds)
[info] - Jar path propagation through SparkConf (630 milliseconds)
[info] - Cluster path translation (46 milliseconds)
[info] - configuration and args propagate through createApplicationSubmissionContext (137 milliseconds)
[info] - spark.yarn.jars with multiple paths and globs (285 milliseconds)
[info] - distribute jars archive (160 milliseconds)
[info] - distribute archive multiple times (735 milliseconds)
[info] - distribute local spark jars (168 milliseconds)
[info] - ignore same name jars (151 milliseconds)
[info] - SPARK-31582 Being able to not populate Hadoop classpath (102 milliseconds)
[info] - files URI match test1 (1 millisecond)
[info] - files URI match test2 (1 millisecond)
[info] - files URI match test3 (1 millisecond)
[info] - wasb URI match test (1 millisecond)
[info] - hdfs URI match test (1 millisecond)
[info] - files URI unmatch test1 (2 milliseconds)
[info] - files URI unmatch test2 (1 millisecond)
[info] - files URI unmatch test3 (1 millisecond)
[info] - wasb URI unmatch test1 (0 milliseconds)
[info] - wasb URI unmatch test2 (0 milliseconds)
[info] - s3 URI unmatch test (0 milliseconds)
[info] - hdfs URI unmatch test1 (0 milliseconds)
[info] - hdfs URI unmatch test2 (0 milliseconds)
[info] YarnAllocatorSuite:
[info] - single container allocated (234 milliseconds)
[info] - container should not be created if requested number is met (61 milliseconds)
[info] - some containers allocated (52 milliseconds)
[info] - receive more containers than requested (44 milliseconds)
[info] - decrease total requested executors (61 milliseconds)
[info] - decrease total requested executors to less than currently running (52 milliseconds)
[info] - kill executors (92 milliseconds)
[info] - kill same executor multiple times (51 milliseconds)
[info] - process same completed container multiple times (75 milliseconds)
[info] - lost executor removed from backend (64 milliseconds)
[info] - blacklisted nodes reflected in amClient requests (69 milliseconds)
[info] - memory exceeded diagnostic regexes (2 milliseconds)
[info] - window based failure executor counting (51 milliseconds)
[info] - SPARK-26269: YarnAllocator should have same blacklist behaviour with YARN (82 milliseconds)
[info] ClientDistributedCacheManagerSuite:
[info] - test getFileStatus empty (22 milliseconds)
[info] - test getFileStatus cached (1 millisecond)
[info] - test addResource (3 milliseconds)
[info] - test addResource link null (1 millisecond)
[info] - test addResource appmaster only (2 milliseconds)
[info] - test addResource archive (1 millisecond)
[info] ExtensionServiceIntegrationSuite:
[info] - recover from node failures with replication (10 seconds, 607 milliseconds)
[info] - Instantiate (8 milliseconds)
[info] - Contains SimpleExtensionService Service (3 milliseconds)
[info] YarnAllocatorBlacklistTrackerSuite:
[info] - expiring its own blacklisted nodes (2 milliseconds)
[info] - not handling the expiry of scheduler blacklisted nodes (1 millisecond)
[info] - combining scheduler and allocation blacklist (2 milliseconds)
[info] - blacklist all available nodes (2 milliseconds)
[info] YarnClusterSuite:
[info] - unpersist RDDs (4 seconds, 661 milliseconds)
[info] - End-to-end event logging with compression (15 seconds, 654 milliseconds)
[info] - Event logging with password redaction (29 milliseconds)
[info] - Log overwriting (84 milliseconds)
[info] - Event log name (1 millisecond)
[info] FileCommitProtocolInstantiationSuite:
[info] - Dynamic partitions require appropriate constructor (1 millisecond)
[info] - Standard partitions work with classic constructor (1 millisecond)
[info] - Three arg constructors have priority (1 millisecond)
[info] - Three arg constructors have priority when dynamic (0 milliseconds)
[info] - The protocol must be of the correct class (1 millisecond)
[info] - If there is no matching constructor, class hierarchy is irrelevant (1 millisecond)
[info] JobCancellationSuite:
[info] - local mode, FIFO scheduler (127 milliseconds)
[info] - local mode, fair scheduler (155 milliseconds)
[info] - reference partitions inside a task (3 seconds, 271 milliseconds)
[info] - cluster mode, FIFO scheduler (3 seconds, 727 milliseconds)
[info] ReceiverTrackerSuite:
[info] - cluster mode, fair scheduler (3 seconds, 570 milliseconds)
[info] - do not put partially executed partitions into cache (102 milliseconds)
[info] - send rate update to receivers (3 seconds, 382 milliseconds)
[info] - job group (87 milliseconds)
[info] - inherited job group (SPARK-6629) (79 milliseconds)
[info] - job group with interruption (100 milliseconds)
[info] - should restart receiver after stopping it (936 milliseconds)
[info] - SPARK-11063: TaskSetManager should use Receiver RDD's preferredLocations (578 milliseconds)
[info] - get allocated executors (791 milliseconds)
[info] RateLimitedOutputStreamSuite:
[info] - write (4 seconds, 195 milliseconds)
[info] RecurringTimerSuite:
[info] - basic (6 milliseconds)
[info] - SPARK-10224: call 'callback' after stopping (10 milliseconds)
[info] InputStreamsSuite:
Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - socket input stream (755 milliseconds)
[info] - socket input stream - no block in a batch (392 milliseconds)
[info] - run Spark in yarn-client mode (22 seconds, 122 milliseconds)
[info] - binary records stream (6 seconds, 206 milliseconds)
[info] - file input stream - newFilesOnly = true (426 milliseconds)
[info] - file input stream - newFilesOnly = false (475 milliseconds)
[info] - file input stream - wildcard (653 milliseconds)
[info] - multi-thread receiver (1 second, 847 milliseconds)
[info] - task reaper kills JVM if killed tasks keep running for too long (17 seconds, 770 milliseconds)
[info] - queue input stream - oneAtATime = true (1 second, 139 milliseconds)
[info] - queue input stream - oneAtATime = false (2 seconds, 152 milliseconds)
[info] - test track the number of input streams (122 milliseconds)
[info] WriteAheadLogUtilsSuite:
[info] - log selection and creation (47 milliseconds)
[info] - wrap WriteAheadLog in BatchedWriteAheadLog when batching is enabled (5 milliseconds)
[info] - batching is enabled by default in WriteAheadLog (1 millisecond)
[info] - closeFileAfterWrite is disabled by default in WriteAheadLog (0 milliseconds)
[info] ReceiverSchedulingPolicySuite:
[info] - rescheduleReceiver: empty executors (1 millisecond)
[info] - rescheduleReceiver: receiver preferredLocation (1 millisecond)
[info] - rescheduleReceiver: return all idle executors if there are any idle executors (6 milliseconds)
[info] - rescheduleReceiver: return all executors that have minimum weight if no idle executors (3 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more receivers than executors (3 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more executors than receivers (4 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when the preferredLocations are even (7 milliseconds)
[info] - scheduleReceivers: return empty if no receiver (1 millisecond)
[info] - scheduleReceivers: return empty scheduled executors if no executors (2 milliseconds)
[info] PIDRateEstimatorSuite:
[info] - the right estimator is created (14 milliseconds)
[info] - estimator checks ranges (2 milliseconds)
[info] - first estimate is None (2 milliseconds)
[info] - second estimate is not None (1 millisecond)
[info] - no estimate when no time difference between successive calls (2 milliseconds)
[info] - no estimate when no records in previous batch (0 milliseconds)
[info] - no estimate when there is no processing delay (1 millisecond)
[info] - estimate is never less than min rate (22 milliseconds)
[info] - with no accumulated or positive error, |I| > 0, follow the processing speed (4 milliseconds)
[info] - with no accumulated but some positive error, |I| > 0, follow the processing speed (4 milliseconds)
[info] - with some accumulated and some positive error, |I| > 0, stay below the processing speed (21 milliseconds)
[info] ReceivedBlockHandlerSuite:
[info] - BlockManagerBasedBlockHandler - store blocks (834 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (23 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - store blocks (673 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (31 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (104 milliseconds)
[info] - Test Block - count messages (153 milliseconds)
[info] - Test Block - isFullyConsumed (30 milliseconds)
[info] ReceivedBlockHandlerWithEncryptionSuite:
[info] - BlockManagerBasedBlockHandler - store blocks (464 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (10 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - store blocks (692 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (32 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (103 milliseconds)
[info] - Test Block - count messages (192 milliseconds)
[info] - Test Block - isFullyConsumed (35 milliseconds)
[info] InputInfoTrackerSuite:
[info] - test report and get InputInfo from InputInfoTracker (1 millisecond)
[info] - test cleanup InputInfo from InputInfoTracker (1 millisecond)
[info] JobGeneratorSuite:
[info] - SPARK-6222: Do not clear received block data too soon (2 seconds, 453 milliseconds)
[info] ReceivedBlockTrackerSuite:
[info] - block addition, and block to batch allocation (6 milliseconds)
[info] - task reaper will not kill JVM if spark.task.killTimeout == -1 (13 seconds, 623 milliseconds)
[info] - two jobs sharing the same stage (107 milliseconds)
[info] - run Spark in yarn-cluster mode (20 seconds, 33 milliseconds)
[info] - interruptible iterator of shuffle reader (159 milliseconds)
[info] TaskContextSuite:
[info] - provide metrics sources (101 milliseconds)
[info] - calls TaskCompletionListener after failure (63 milliseconds)
[info] - calls TaskFailureListeners after failure (59 milliseconds)
[info] - all TaskCompletionListeners should be called even if some fail (6 milliseconds)
[info] - all TaskFailureListeners should be called even if some fail (5 milliseconds)
[info] - TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014) (71 milliseconds)
[info] - TaskContext.stageAttemptNumber getter (535 milliseconds)
[info] - accumulators are updated on exception failures (117 milliseconds)
[info] - failed tasks collect only accumulators whose values count during failures (50 milliseconds)
[info] - only updated internal accumulators will be sent back to driver (61 milliseconds)
[info] - localProperties are propagated to executors correctly (146 milliseconds)
[info] - immediately call a completion listener if the context is completed (1 millisecond)
[info] - immediately call a failure listener if the context has failed (1 millisecond)
[info] - TaskCompletionListenerException.getMessage should include previousError (1 millisecond)
[info] - all TaskCompletionListeners should be called even if some fail or a task (4 milliseconds)
[info] DAGSchedulerSuite:
[info] - [SPARK-3353] parent stage should have lower stage id (69 milliseconds)
[info] - [SPARK-13902] Ensure no duplicate stages are created (22 milliseconds)
[info] - All shuffle files on the slave should be cleaned up when slave lost (108 milliseconds)
[info] - SPARK-32003: All shuffle files for executor should be cleaned up on fetch failure (129 milliseconds)
[info] - zero split job (5 milliseconds)
[info] - run trivial job (5 milliseconds)
[info] - run trivial job w/ dependency (4 milliseconds)
[info] - equals and hashCode AccumulableInfo (1 millisecond)
[info] - cache location preferences w/ dependency (8 milliseconds)
[info] - regression test for getCacheLocs (2 milliseconds)
[info] - getMissingParentStages should consider all ancestor RDDs' cache statuses (4 milliseconds)
[info] - avoid exponential blowup when getting preferred locs list (63 milliseconds)
[info] - unserializable task (7 milliseconds)
[info] - trivial job failure (5 milliseconds)
[info] - trivial job cancellation (4 milliseconds)
[info] - job cancellation no-kill backend (6 milliseconds)
[info] - run trivial shuffle (11 milliseconds)
[info] - run trivial shuffle with fetch failure (19 milliseconds)
[info] - shuffle files not lost when slave lost with shuffle service (125 milliseconds)
[info] - shuffle files lost when worker lost with shuffle service (112 milliseconds)
[info] - shuffle files lost when worker lost without shuffle service (103 milliseconds)
[info] - shuffle files not lost when executor failure with shuffle service (107 milliseconds)
[info] - shuffle files lost when executor failure without shuffle service (181 milliseconds)
[info] - Single stage fetch failure should not abort the stage. (43 milliseconds)
[info] - Multiple consecutive stage fetch failures should lead to job being aborted. (47 milliseconds)
[info] - Failures in different stages should not trigger an overall abort (51 milliseconds)
[info] - Non-consecutive stage failures don't trigger abort (60 milliseconds)
[info] - trivial shuffle with multiple fetch failures (9 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure (30 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled (26 milliseconds)
[info] - Fail the job if a barrier ResultTask failed (11 milliseconds)
[info] - late fetch failures don't cause multiple concurrent attempts for the same map stage (13 milliseconds)
[info] - extremely late fetch failures don't cause multiple concurrent attempts for the same stage (33 milliseconds)
[info] - task events always posted in speculation / when stage is killed (46 milliseconds)
[info] - ignore late map task completions (11 milliseconds)
[info] - run shuffle with map stage failure (5 milliseconds)
[info] - shuffle fetch failure in a reused shuffle dependency (18 milliseconds)
[info] - don't submit stage until its dependencies map outputs are registered (SPARK-5259) (24 milliseconds)
[info] - register map outputs correctly after ExecutorLost and task Resubmitted (11 milliseconds)
[info] - failure of stage used by two jobs (9 milliseconds)
[info] - stage used by two jobs, the first no longer active (SPARK-6880) (12 milliseconds)
[info] - stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) (24 milliseconds)
[info] - run trivial shuffle with out-of-band executor failure and retry (15 milliseconds)
[info] - recursive shuffle failures (26 milliseconds)
[info] - cached post-shuffle (45 milliseconds)
[info] - misbehaved accumulator should not crash DAGScheduler and SparkContext (24 milliseconds)
[info] - misbehaved accumulator should not impact other accumulators (15 milliseconds)
[info] - misbehaved resultHandler should not crash DAGScheduler and SparkContext (21 milliseconds)
[info] - getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) (17 milliseconds)
[info] - getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) (16 milliseconds)
[info] - accumulator not calculated for resubmitted result stage (6 milliseconds)
[info] - accumulator not calculated for resubmitted task in result stage (5 milliseconds)
[info] - accumulators are updated on exception failures and task killed (6 milliseconds)
[info] - reduce tasks should be placed locally with map output (10 milliseconds)
[info] - reduce task locality preferences should only include machines with largest map outputs (11 milliseconds)
[info] - stages with both narrow and shuffle dependencies use narrow ones for locality (12 milliseconds)
[info] - Spark exceptions should include call site in stack trace (15 milliseconds)
[info] - catch errors in event loop (6 milliseconds)
[info] - simple map stage submission (16 milliseconds)
[info] - map stage submission with reduce stage also depending on the data (11 milliseconds)
[info] - map stage submission with fetch failure (21 milliseconds)
[info] - map stage submission with multiple shared stages and failures (29 milliseconds)
[info] - Trigger mapstage's job listener in submitMissingTasks (18 milliseconds)
[info] - map stage submission with executor failure late map task completions (17 milliseconds)
[info] - getShuffleDependencies correctly returns only direct shuffle parents (2 milliseconds)
[info] - block addition, and block to batch allocation with many blocks (13 seconds, 739 milliseconds)
[info] - recovery with write ahead logs should remove only allocated blocks from received queue (15 milliseconds)
[info] - block allocation to batch should not lose blocks from received queue (181 milliseconds)
[info] - recovery and cleanup with write ahead logs (42 milliseconds)
[info] - disable write ahead log when checkpoint directory is not set (1 millisecond)
[info] - parallel file deletion in FileBasedWriteAheadLog is robust to deletion error (26 milliseconds)
[info] WindowOperationsSuite:
[info] - window - basic window (521 milliseconds)
[info] - window - tumbling window (331 milliseconds)
[info] - SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures (1 second, 419 milliseconds)
[info] - [SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts (22 milliseconds)
[info] - window - larger window (516 milliseconds)
[info] - task end event should have updated accumulators (SPARK-20342) (129 milliseconds)
[info] - Barrier task failures from the same stage attempt don't trigger multiple stage retries (11 milliseconds)
[info] - window - non-overlapping window (355 milliseconds)
[info] - Barrier task failures from a previous stage attempt don't trigger stage retry (11 milliseconds)
[info] - SPARK-23207: retry all the succeeding stages when the map stage is indeterminate (15 milliseconds)
[info] - window - persistence level (124 milliseconds)
[info] - SPARK-29042: Sampled RDD with unordered input should be indeterminate (5 milliseconds)
[info] - SPARK-23207: cannot rollback a result stage (9 milliseconds)
[info] - SPARK-23207: local checkpoint fail to rollback (checkpointed before) (32 milliseconds)
[info] - reduceByKeyAndWindow - basic reduction (328 milliseconds)
[info] - SPARK-23207: local checkpoint fail to rollback (checkpointing now) (10 milliseconds)
[info] - compacted topic (2 minutes, 5 seconds)
[info] - SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) (75 milliseconds)
[info] - reduceByKeyAndWindow - key already in window and new value added into window (290 milliseconds)
[info] - SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) (23 milliseconds)
[info] - iterator boundary conditions (283 milliseconds)
[info] - executor sorting (9 milliseconds)
[info] - SPARK-28699: abort stage if parent stage is indeterminate stage (10 milliseconds)
[info] PrefixComparatorsSuite:
[info] - reduceByKeyAndWindow - new key added into window (323 milliseconds)
[info] - String prefix comparator (152 milliseconds)
[info] - Binary prefix comparator (9 milliseconds)
[info] - double prefix comparator handles NaNs properly (0 milliseconds)
[info] - double prefix comparator handles negative NaNs properly (0 milliseconds)
[info] - double prefix comparator handles other special values properly (1 millisecond)
[info] MasterWebUISuite:
[info] - reduceByKeyAndWindow - key removed from window (364 milliseconds)
[info] - kill application (273 milliseconds)
[info] - kill driver (123 milliseconds)
[info] SorterSuite:
[info] - equivalent to Arrays.sort (34 milliseconds)
[info] - KVArraySorter (89 milliseconds)
[info] - reduceByKeyAndWindow - larger slide time (414 milliseconds)
[info] - reduceByKeyAndWindow - big test (698 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - basic reduction (332 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - key already in window and new value added into window (298 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - new key added into window (285 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - reduceByKeyAndWindow with inverse function - key removed from window (345 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - larger slide time (348 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - big test (647 milliseconds)
[info] - reduceByKeyAndWindow with inverse and filter functions - big test (678 milliseconds)
[info] - groupByKeyAndWindow (533 milliseconds)
[info] - countByWindow (514 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (2 seconds, 953 milliseconds)
[info] - countByValueAndWindow (400 milliseconds)
[info] StreamingListenerSuite:
[info] - run Spark in yarn-client mode with different configurations, ensuring redaction (19 seconds, 28 milliseconds)
[info] - batch info reporting (604 milliseconds)
[info] - receiver info reporting (151 milliseconds)
[info] - output operation reporting (515 milliseconds)
[info] - don't call ssc.stop in listener (975 milliseconds)
[info] - pattern based subscription (2 seconds, 970 milliseconds)
[info] - receiving from largest starting offset (329 milliseconds)
[info] - creating stream by offset (268 milliseconds)
[info] - onBatchCompleted with successful batch (1 second)
[info] - onBatchCompleted with failed batch and one failed job (1 second, 5 milliseconds)
[info] - onBatchCompleted with failed batch and multiple failed jobs (993 milliseconds)
[info] - StreamingListener receives no events after stopping StreamingListenerBus (399 milliseconds)
[info] ReceiverInputDStreamSuite:
[info] - Without WAL enabled: createBlockRDD creates empty BlockRDD when no block info (93 milliseconds)
[info] - Without WAL enabled: createBlockRDD creates correct BlockRDD with block info (99 milliseconds)
[info] - Without WAL enabled: createBlockRDD filters non-existent blocks before creating BlockRDD (99 milliseconds)
[info] - With WAL enabled: createBlockRDD creates empty WALBackedBlockRDD when no block info (94 milliseconds)
[info] - With WAL enabled: createBlockRDD creates correct WALBackedBlockRDD with all block info having WAL info (139 milliseconds)
[info] - offset recovery (3 seconds, 53 milliseconds)
[info] - With WAL enabled: createBlockRDD creates BlockRDD when some block info don't have WAL info (103 milliseconds)
[info] WriteAheadLogBackedBlockRDDSuite:
[info] - Read data available in both block manager and write ahead log (91 milliseconds)
[info] - Read data available only in block manager, not in write ahead log (49 milliseconds)
[info] - Read data available only in write ahead log, not in block manager (64 milliseconds)
[info] - Read data with partially available in block manager, and rest in write ahead log (57 milliseconds)
[info] - Test isBlockValid skips block fetching from BlockManager (137 milliseconds)
[info] - offset recovery from kafka (510 milliseconds)
[info] - Test whether RDD is valid after removing blocks from block manager (108 milliseconds)
[info] - Test storing of blocks recovered from write ahead log back into block manager (125 milliseconds)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@1caa8315 rejected from java.util.concurrent.ThreadPoolExecutor@126d7ef0[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@35c24fea rejected from java.util.concurrent.ThreadPoolExecutor@126d7ef0[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@719f98fe rejected from java.util.concurrent.ThreadPoolExecutor@126d7ef0[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3c1d893 rejected from java.util.concurrent.ThreadPoolExecutor@126d7ef0[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - read data in block manager and WAL with encryption on (163 milliseconds)
[info] TimeSuite:
[info] - less (1 millisecond)
[info] - lessEq (0 milliseconds)
[info] - greater (0 milliseconds)
[info] - greaterEq (1 millisecond)
[info] - plus (0 milliseconds)
[info] - minus Time (0 milliseconds)
[info] - minus Duration (0 milliseconds)
[info] - floor (1 millisecond)
[info] - isMultipleOf (0 milliseconds)
[info] - min (0 milliseconds)
[info] - max (1 millisecond)
[info] - until (2 milliseconds)
[info] - to (1 millisecond)
[info] DStreamScopeSuite:
[info] - dstream without scope (1 millisecond)
[info] - input dstream without scope (3 milliseconds)
[info] - scoping simple operations (9 milliseconds)
[info] - Direct Kafka stream report input information (505 milliseconds)
[info] - scoping nested operations (28 milliseconds)
[info] - transform should allow RDD operations to be captured in scopes (16 milliseconds)
[info] - foreachRDD should allow RDD operations to be captured in scope (24 milliseconds)
[info] ReceiverSuite:
[info] - maxMessagesPerPartition with backpressure disabled (119 milliseconds)
[info] - maxMessagesPerPartition with no lag (93 milliseconds)
[info] - maxMessagesPerPartition respects max rate (63 milliseconds)
[info] - receiver life cycle (344 milliseconds)
[info] - block generator throttling !!! IGNORED !!!
[info] - using rate controller (2 seconds, 140 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (272 milliseconds)
[info] - use backpressure.initialRate with backpressure (223 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to the specified minimum with default 1 (49 milliseconds)
[info] - SPARK-5984 TimSort bug (18 seconds, 129 milliseconds)
[info] - Sorter benchmark for key-value pairs !!! IGNORED !!!
[info] - Sorter benchmark for primitive int array !!! IGNORED !!!
[info] RandomSamplerSuite:
[info] - utilities (7 milliseconds)
[info] - sanity check medianKSD against references (86 milliseconds)
[info] - bernoulli sampling (47 milliseconds)
[info] - bernoulli sampling without iterator (45 milliseconds)
[info] - bernoulli sampling with gap sampling optimization (87 milliseconds)
[info] - bernoulli sampling (without iterator) with gap sampling optimization (94 milliseconds)
[info] - bernoulli boundary cases (1 millisecond)
[info] - bernoulli (without iterator) boundary cases (3 milliseconds)
[info] - bernoulli data types (75 milliseconds)
[info] - bernoulli clone (19 milliseconds)
[info] - bernoulli set seed (34 milliseconds)
[info] - replacement sampling (52 milliseconds)
[info] - replacement sampling without iterator (54 milliseconds)
[info] KafkaDataConsumerSuite:
[info] - replacement sampling with gap sampling (171 milliseconds)
[info] - KafkaDataConsumer reuse in case of same groupId and TopicPartition (5 milliseconds)
[info] - replacement sampling (without iterator) with gap sampling (138 milliseconds)
[info] - replacement boundary cases (1 millisecond)
[info] - replacement (without iterator) boundary cases (1 millisecond)
[info] - replacement data types (82 milliseconds)
[info] - replacement clone (36 milliseconds)
[info] - replacement set seed (50 milliseconds)
[info] - bernoulli partitioning sampling (35 milliseconds)
[info] - bernoulli partitioning sampling without iterator (37 milliseconds)
[info] - bernoulli partitioning boundary cases (0 milliseconds)
[info] - bernoulli partitioning (without iterator) boundary cases (3 milliseconds)
[info] - bernoulli partitioning data (1 millisecond)
[info] - bernoulli partitioning clone (1 millisecond)
[info] PoolSuite:
[info] - FIFO Scheduler Test (73 milliseconds)
[info] - Fair Scheduler Test (77 milliseconds)
[info] - Nested Pool Test (92 milliseconds)
[info] - SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid data (9 milliseconds)
[info] - FIFO scheduler uses root pool and not spark.scheduler.pool property (100 milliseconds)
[info] - FAIR Scheduler uses default pool when spark.scheduler.pool property is not set (88 milliseconds)
[info] - FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool (79 milliseconds)
[info] - Pool should throw IllegalArgumentException when schedulingMode is not supported (1 millisecond)
[info] - Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set (65 milliseconds)
[info] - Fair Scheduler should use default file (fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set (99 milliseconds)
[info] - Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set (63 milliseconds)
[info] DiskStoreSuite:
[info] - reads of memory-mapped and non memory-mapped files are equivalent (33 milliseconds)
[info] - block size tracking (32 milliseconds)
[info] - blocks larger than 2gb (49 milliseconds)
[info] - block data encryption (90 milliseconds)
[info] BlockManagerReplicationSuite:
[info] - get peers with addition and removal of block managers (35 milliseconds)
[info] - concurrent use of KafkaDataConsumer (1 second, 873 milliseconds)
[info] - block replication - 2x replication (513 milliseconds)
[info] - write ahead log - generating and cleaning (8 seconds, 703 milliseconds)
[info] StateMapSuite:
[info] - EmptyStateMap (1 millisecond)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove (11 milliseconds)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove with copy (1 millisecond)
[info] - OpenHashMapBasedStateMap - serializing and deserializing (78 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with compaction (11 milliseconds)
[info] - block replication - 3x replication (936 milliseconds)
[info] - block replication - mixed between 1x to 5x (1 second, 387 milliseconds)
[info] - block replication - off-heap (238 milliseconds)
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - block replication - replication failures (85 milliseconds)
[info] - block replication - addition and deletion of block managers (214 milliseconds)
[info] BlockManagerProactiveReplicationSuite:
[info] - get peers with addition and removal of block managers (19 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaLocationStrategySuite.testLocationStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.006s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite.testKafkaRDD started
[info] - block replication - 2x replication (453 milliseconds)
[info] - block replication - 3x replication (851 milliseconds)
[info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction (21 seconds, 28 milliseconds)
[info] - block replication - mixed between 1x to 5x (1 second, 326 milliseconds)
[info] - OpenHashMapBasedStateMap - all possible sequences of operations with copies  (6 seconds, 296 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with KryoSerializable states (13 milliseconds)
[info] - EmptyStateMap - serializing and deserializing (24 milliseconds)
[info] - MapWithStateRDDRecord - serializing and deserializing with KryoSerializable states (12 milliseconds)
[info] UIUtilsSuite:
[info] - shortTimeUnitString (1 millisecond)
[info] - normalizeDuration (3 milliseconds)
[info] - convertToTimeUnit (0 milliseconds)
[info] - formatBatchTime (0 milliseconds)
[info] DurationSuite:
[info] - less (1 millisecond)
[info] - lessEq (0 milliseconds)
[info] - greater (1 millisecond)
[info] - greaterEq (0 milliseconds)
[info] - plus (1 millisecond)
[info] - minus (0 milliseconds)
[info] - times (1 millisecond)
[info] - div (0 milliseconds)
[info] - isMultipleOf (0 milliseconds)
[info] - min (0 milliseconds)
[info] - max (0 milliseconds)
[info] - isZero (1 millisecond)
[info] - Milliseconds (0 milliseconds)
[info] - Seconds (1 millisecond)
[info] - Minutes (0 milliseconds)
[info] MapWithStateRDDSuite:
[info] Test run finished: 0 failed, 0 ignored, 1 total, 3.064s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaConsumerStrategySuite.testConsumerStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - block replication - off-heap (266 milliseconds)
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - creation from pair RDD (319 milliseconds)
[info] - updating state and generating mapped data in MapWithStateRDDRecord (6 milliseconds)
[info] - block replication - replication failures (55 milliseconds)
[info] - block replication - addition and deletion of block managers (203 milliseconds)
[info] - proactive block replication - 2 replicas - 1 block manager deletions (79 milliseconds)
[info] - proactive block replication - 3 replicas - 2 block manager deletions (81 milliseconds)
[info] - states generated by MapWithStateRDD (1 second, 118 milliseconds)
[info] - proactive block replication - 4 replicas - 3 block manager deletions (532 milliseconds)
[info] - proactive block replication - 5 replicas - 4 block manager deletions (158 milliseconds)
[info] BlockManagerBasicStrategyReplicationSuite:
[info] - get peers with addition and removal of block managers (14 milliseconds)
[info] - checkpointing (1 second, 102 milliseconds)
[info] - block replication - 2x replication (435 milliseconds)
[info] - checkpointing empty state RDD (355 milliseconds)
[info] DStreamClosureSuite:
[info] - user provided closures are actually cleaned (47 milliseconds)
[info] UISeleniumSuite:
[info] Test run finished: 0 failed, 0 ignored, 1 total, 3.071s
[info] - block replication - 3x replication (988 milliseconds)
[info] SparkAWSCredentialsBuilderSuite:
[info] - should build DefaultCredentials when given no params (23 milliseconds)
[info] - should build BasicCredentials (2 milliseconds)
[info] - should build STSCredentials (1 millisecond)
[info] - SparkAWSCredentials classes should be serializable (5 milliseconds)
[info] KinesisCheckpointerSuite:
[info] - checkpoint is not called twice for the same sequence number (36 milliseconds)
[info] - checkpoint is called after sequence number increases (3 milliseconds)
[info] - should checkpoint if we have exceeded the checkpoint interval (12 milliseconds)
[info] - shouldn't checkpoint if we have not exceeded the checkpoint interval (1 millisecond)
[info] - should not checkpoint for the same sequence number (2 milliseconds)
[info] - removing checkpointer checkpoints one last time (1 millisecond)
[info] - if checkpointing is going on, wait until finished before removing and checkpointing (91 milliseconds)
[info] - block replication - mixed between 1x to 5x (1 second, 462 milliseconds)
[info] - block replication - off-heap (226 milliseconds)
[info] - attaching and detaching a Streaming tab (1 second, 953 milliseconds)
[info] FileBasedWriteAheadLogSuite:
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - FileBasedWriteAheadLog - read all logs (33 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (21 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (22 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (20 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (18 milliseconds)
[info] - block replication - replication failures (65 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (63 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (15 milliseconds)
[info] KinesisInputDStreamBuilderSuite:
[info] - should raise an exception if the StreamingContext is missing (8 milliseconds)
[info] - should raise an exception if the stream name is missing (7 milliseconds)
[info] - FileBasedWriteAheadLog - seqToParIterator (92 milliseconds)
[info] - should raise an exception if the checkpoint app name is missing (3 milliseconds)
[info] - FileBasedWriteAheadLogWriter - writing data (14 milliseconds)
[info] - FileBasedWriteAheadLogWriter - syncing of data by writing and reading immediately (15 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data (2 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data written with writer (5 milliseconds)
[info] - block replication - addition and deletion of block managers (230 milliseconds)
[info] FlatmapIteratorSuite:
[info] - should propagate required values to KinesisInputDStream (320 milliseconds)
[info] - should propagate default values to KinesisInputDStream (4 milliseconds)
[info] - should propagate custom non-auth values to KinesisInputDStream (11 milliseconds)
[info] - old API should throw UnsupportedOperationException with AT_TIMESTAMP (1 millisecond)
[info] - Flatmap Iterator to Disk (139 milliseconds)
[info] KinesisReceiverSuite:
[info] - process records including store and set checkpointer (4 milliseconds)
[info] - split into multiple processes if a limitation is set (2 milliseconds)
[info] - shouldn't store and update checkpointer when receiver is stopped (2 milliseconds)
[info] - shouldn't update checkpointer when exception occurs during store (7 milliseconds)
[info] - shutdown should checkpoint if the reason is TERMINATE (10 milliseconds)
[info] - shutdown should not checkpoint if the reason is something other than TERMINATE (2 milliseconds)
[info] - retry success on first attempt (2 milliseconds)
[info] - Flatmap Iterator to Memory (68 milliseconds)
[info] - retry success on second attempt after a Kinesis throttling exception (83 milliseconds)
[info] - Serializer Reset (82 milliseconds)
[info] - retry success on second attempt after a Kinesis dependency exception (35 milliseconds)
[info] - retry failed after a shutdown exception (4 milliseconds)
[info] - retry failed after an invalid state exception (4 milliseconds)
[info] - retry failed after unexpected exception (4 milliseconds)
[info] RDDSuite:
[info] - retry failed after exhausting all retries (97 milliseconds)
[info] WithAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - FileBasedWriteAheadLogReader - reading data written with writer after corrupted write (791 milliseconds)
[info] - FileBasedWriteAheadLogReader - handles errors when file doesn't exist (4 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader (15 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader- reading data using random reader written with writer (10 milliseconds)
[info] FileBasedWriteAheadLogWithFileCloseAfterWriteSuite:
[info] - FileBasedWriteAheadLog - read all logs (47 milliseconds)
[info] WithAggregationKinesisStreamSuite:
[info] - FileBasedWriteAheadLog - write logs (40 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (62 milliseconds)
[info] - KinesisUtils API (22 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (36 milliseconds)
[info] - basic operations (409 milliseconds)
[info] - RDD generation (34 milliseconds)
[info] - serialization (1 millisecond)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (5 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - FileBasedWriteAheadLog - clean old logs synchronously (73 milliseconds)
[info] WithoutAggregationKinesisStreamSuite:
[info] - countApproxDistinct (76 milliseconds)
[info] - SparkContext.union (35 milliseconds)
[info] - KinesisUtils API (2 milliseconds)
[info] - RDD generation (3 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - SparkContext.union parallel partition listing (63 milliseconds)
[info] - Kinesis read with custom configurations (4 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - SparkContext.union creates UnionRDD if at least one RDD has no partitioner (3 milliseconds)
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - SparkContext.union creates PartitionAwareUnionRDD if all RDDs have partitioners (4 milliseconds)
[info] - PartitionAwareUnionRDD raises exception if at least one RDD has no partitioner (2 milliseconds)
[info] - SPARK-23778: empty RDD in union should not produce a UnionRDD (5 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (163 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (3 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (8 milliseconds)
[info] - FileBasedWriteAheadLog - close after write flag (3 milliseconds)
[info] BatchedWriteAheadLogSuite:
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsStsCreds started
[info] - BatchedWriteAheadLog - read all logs (23 milliseconds)
[info] - partitioner aware union (140 milliseconds)
[info] - UnionRDD partition serialized size should be small (4 milliseconds)
[info] - BatchedWriteAheadLog - write logs (26 milliseconds)
[info] - fold (10 milliseconds)
[info] - fold with op modifying first arg (12 milliseconds)
[info] - aggregate (15 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (33 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (20 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (20 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (65 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsCreds started
[info] - BatchedWriteAheadLog - do not create directories or files unless write (3 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (9 milliseconds)
[info] - BatchedWriteAheadLog - serializing and deserializing batched records (2 milliseconds)
[info] - BatchedWriteAheadLog - failures in wrappedLog get bubbled up (19 milliseconds)
[info] - BatchedWriteAheadLog - name log with the highest timestamp of aggregated entries (15 milliseconds)
[info] - BatchedWriteAheadLog - shutdown properly (2 milliseconds)
[info] - BatchedWriteAheadLog - fail everything in queue during shutdown (5 milliseconds)
[info] BatchedWriteAheadLogWithCloseFileAfterWriteSuite:
[info] - BatchedWriteAheadLog - read all logs (37 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandler started
[info] - BatchedWriteAheadLog - write logs (50 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (127 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (41 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testAwsCreds started
[info] - BatchedWriteAheadLog - clean old logs synchronously (52 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testKinesisStream started
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (172 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (3 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (8 milliseconds)
[info] - BatchedWriteAheadLog - close after write flag (4 milliseconds)
[info] CheckpointSuite:
[info] - treeAggregate (799 milliseconds)
[info] - non-existent checkpoint dir (3 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.98s
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilderOldApi started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilder started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.302s
[info] ScalaTest
[info] Run completed in 5 minutes, 39 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 51, Failed 0, Errors 0, Passed 51
[info] ScalaTest
[info] Run completed in 5 minutes, 40 seconds.
[info] Total number of tests run: 29
[info] Suites: completed 3, aborted 0
[info] Tests: succeeded 29, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 29, Failed 0, Errors 0, Passed 29
[info] - treeAggregate with ops modifying first args (571 milliseconds)
[info] - treeReduce (254 milliseconds)
[info] - basic caching (27 milliseconds)
[info] - caching with failures (14 milliseconds)
[info] - empty RDD (114 milliseconds)
[info] ExecutorClassLoaderSuite:
[info] - repartitioned RDDs (638 milliseconds)
[info] - child over system classloader (852 milliseconds)
[info] - child first (55 milliseconds)
[info] - parent first (46 milliseconds)
[info] - child first can fall back (40 milliseconds)
[info] - child first can fail (51 milliseconds)
[info] - resource from parent (47 milliseconds)
[info] - resources from parent (38 milliseconds)
[info] - fetch classes using Spark's RpcEnv (213 milliseconds)
[info] ReplSuite:
[info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (18 seconds, 32 milliseconds)
[info] - basic rdd checkpoints + dstream graph checkpoint recovery (8 seconds, 919 milliseconds)
[info] - recovery of conf through checkpoints (254 milliseconds)
[info] - repartitioned RDDs perform load balancing (7 seconds, 595 milliseconds)
[info] - get correct spark.driver.[host|port] from checkpoint (176 milliseconds)
[info] - coalesced RDDs (228 milliseconds)
[info] - coalesced RDDs with locality (59 milliseconds)
[info] - coalesced RDDs with partial locality (46 milliseconds)
[info] - SPARK-30199 get ui port and blockmanager port (220 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with map and reduceByKey operations (493 milliseconds)
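The banner-delimited blocks above are DStream.print() output from the checkpoint-recovery tests; the repeated "Time: 1500 ms" batch is the same batch emitted once before the simulated failure and once again after recovery. A rough sketch of the shape of such a job, assuming an invented socket source and checkpoint path (the suite drives its input programmatically):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object CheckpointedWordCount {
      def createContext(checkpointDir: String): StreamingContext = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("wordcount")
        val ssc = new StreamingContext(conf, Seconds(1))
        ssc.checkpoint(checkpointDir) // persists the DStream graph for recovery
        ssc.socketTextStream("localhost", 9999) // hypothetical source
          .flatMap(_.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
          .print() // produces the "Time: ... ms" banners seen in this log
        ssc
      }

      def main(args: Array[String]): Unit = {
        // On restart, getOrCreate rebuilds the context from the checkpoint;
        // this is the code path the "recovery with ..." tests exercise.
        val dir = "/tmp/wc-checkpoint" // hypothetical path
        val ssc = StreamingContext.getOrCreate(dir, () => createContext(dir))
        ssc.start()
        ssc.awaitTermination()
      }
    }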
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,4)

[info] - propagation of local properties (5 seconds, 770 milliseconds)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,4)

[info] - coalesced RDDs with locality, large scale (10K partitions) (1 second, 495 milliseconds)
[info] - recovery with invertible reduceByKeyAndWindow operation (936 milliseconds)
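The "invertible" in this test name refers to giving reduceByKeyAndWindow an inverse function so the window is maintained incrementally: the new batch is added and the batch that slid out of the window is subtracted, instead of re-reducing the whole window each time. A hypothetical sketch (source and checkpoint path invented):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object InvertibleWindowCount {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setMaster("local[2]").setAppName("window"), Seconds(1))
        ssc.checkpoint("/tmp/window-checkpoint") // required by the inverse-function variant
        val pairs = ssc.socketTextStream("localhost", 9999) // hypothetical source
          .flatMap(_.split(" ")).map((_, 1))
        pairs.reduceByKeyAndWindow(
          _ + _,       // fold new values into the window
          _ - _,       // remove values that slid out (the invertible part)
          Seconds(4),  // window length
          Seconds(2))  // slide interval
          .print()
        ssc.start()
        ssc.awaitTermination()
      }
    }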
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

[info] - coalesced RDDs with partial locality, large scale (10K partitions) (659 milliseconds)
[info] - coalesced RDDs with locality, fail first pass (18 milliseconds)
[info] - zipped RDDs (41 milliseconds)
[info] - partition pruning (20 milliseconds)
-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFiles operation (1 second, 106 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsNewAPIHadoopFiles operation (993 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(b,1)
(a,2)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

Spark context available as 'sc' (master = local, app id = local-1620405924379).
Spark session available as 'spark'.
-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(b,1)
(a,2)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFile inside transform operation (1 second, 589 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,5)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,6)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,8)

-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,9)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,10)

[info] - recovery with updateStateByKey operation (1 second, 82 milliseconds)
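The (a,1) through (a,10) sequence above is a per-key running count carried across batches and across the recovery; updateStateByKey is the older stateful API this test exercises. A hypothetical sketch, with the source and checkpoint path invented:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object RunningCount {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setMaster("local[2]").setAppName("state"), Seconds(1))
        ssc.checkpoint("/tmp/state-checkpoint") // stateful ops require a checkpoint dir

        // Carry the running sum for each key from batch to batch.
        val update: (Seq[Int], Option[Int]) => Option[Int] =
          (values, state) => Some(values.sum + state.getOrElse(0))

        ssc.socketTextStream("localhost", 9999) // hypothetical source
          .flatMap(_.split(" ")).map((_, 1))
          .updateStateByKey(update)
          .print()
        ssc.start()
        ssc.awaitTermination()
      }
    }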
[info] - SPARK-15236: use Hive catalog (7 seconds, 860 milliseconds)
[info] - recovery maintains rate controller (2 seconds, 731 milliseconds)
[info] - collect large number of empty partitions (7 seconds, 744 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405930362).
Spark session available as 'spark'.
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
	at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:750)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2107)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2132)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:100)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:99)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
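This Error wrapping an InterruptedException is the expected signature of a "streaming-job-executor" thread being interrupted while blocked in SparkContext.runJob, i.e. a deliberate non-graceful shutdown in the test rather than a crash. The distinction, sketched with a hypothetical helper:

    import org.apache.spark.streaming.StreamingContext

    object Shutdown {
      // stopGracefully = false interrupts jobs still blocked in
      // SparkContext.runJob, which surfaces on the streaming-job-executor
      // thread exactly as in the trace above; stopGracefully = true lets
      // queued batches drain before stopping.
      def stopStreaming(ssc: StreamingContext, graceful: Boolean): Unit =
        ssc.stop(stopSparkContext = true, stopGracefully = graceful)
    }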
[info] - take (1 second, 415 milliseconds)
[info] - top with predefined ordering (58 milliseconds)
[info] - top with custom ordering (10 milliseconds)
[info] - takeOrdered with predefined ordering (7 milliseconds)
[info] - takeOrdered with limit 0 (0 milliseconds)
[info] - takeOrdered with custom ordering (8 milliseconds)
[info] - isEmpty (54 milliseconds)
[info] - sample preserves partitioner (2 milliseconds)
[info] - SPARK-15236: use in-memory catalog (3 seconds, 459 milliseconds)
[info] - recovery with file input stream (3 seconds, 256 milliseconds)
[info] - DStreamCheckpointData.restore invoking times (252 milliseconds)
[info] - recovery from checkpoint contains array object (800 milliseconds)
[info] - SPARK-11267: the race condition of two checkpoints in a batch (40 milliseconds)
[info] - SPARK-28912: Fix MatchError in getCheckpointFiles (22 milliseconds)
[info] - SPARK-6847: stack overflow when updateStateByKey is followed by a checkpointed dstream (202 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405933614).
Spark session available as 'spark'.
[info] MapWithStateSuite:
[info] - state - get, exists, update, remove (2 milliseconds)
[info] - mapWithState - basic operations with simple API (374 milliseconds)
[info] - mapWithState - basic operations with advanced API (317 milliseconds)
[info] - mapWithState - type inferencing and class tags (6 milliseconds)
[info] - mapWithState - states as mapped data (310 milliseconds)
[info] - mapWithState - initial states, with nothing returned from the mapping function (420 milliseconds)
[info] - mapWithState - state removing (457 milliseconds)
[info] - mapWithState - state timing out (1 second, 108 milliseconds)
[info] - mapWithState - checkpoint durations (57 milliseconds)
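mapWithState is the newer stateful API covered by this suite; a single StateSpec bundles the mapping function, timeout, and partitioning. A hypothetical sketch of the "simple API" variant (source and checkpoint path invented):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

    object MapWithStateCount {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setMaster("local[2]").setAppName("mapWithState"), Seconds(1))
        ssc.checkpoint("/tmp/mws-checkpoint")

        // The mapping function updates per-key state and emits a record.
        val spec = StateSpec.function { (word: String, one: Option[Int], state: State[Int]) =>
          val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
          state.update(sum) // new running count for this key
          (word, sum)       // value emitted downstream
        }.timeout(Seconds(30)) // idle keys expire, cf. "state timing out" above

        ssc.socketTextStream("localhost", 9999) // hypothetical source
          .flatMap(_.split(" ")).map((_, 1))
          .mapWithState(spec)
          .print()
        ssc.start()
        ssc.awaitTermination()
      }
    }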
-------------------------------------------
Time: 1000 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,1)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,3)
(b,2)
(c,1)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(c,1)
(a,4)
(b,3)

-------------------------------------------
Time: 6000 ms
-------------------------------------------
(b,3)
(c,1)
(a,5)

-------------------------------------------
Time: 7000 ms
-------------------------------------------
(a,5)
(b,3)
(c,1)

[info] - broadcast vars (5 seconds, 135 milliseconds)
[info] - mapWithState - driver failure recovery (526 milliseconds)
[info] BlockGeneratorSuite:
[info] - block generation and data callbacks (38 milliseconds)
[info] - stop ensures correct shutdown (231 milliseconds)
[info] - block push errors are reported (26 milliseconds)
[info] StreamingJobProgressListenerSuite:
[info] - onBatchSubmitted, onBatchStarted, onBatchCompleted, onReceiverStarted, onReceiverError, onReceiverStopped (64 milliseconds)
[info] - Remove the old completed batches when exceeding the limit (75 milliseconds)
[info] - out-of-order onJobStart and onBatchXXX (101 milliseconds)
[info] - detect memory leak (121 milliseconds)
[info] ExecutorAllocationManagerSuite:
[info] - basic functionality (46 milliseconds)
[info] - requestExecutors policy (14 milliseconds)
[info] - killExecutor policy (8 milliseconds)
[info] - parameter validation (13 milliseconds)
[info] - enabling and disabling (296 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405938476).
Spark session available as 'spark'.
[info] RateLimiterSuite:
[info] - rate limiter initializes even without a maxRate set (1 millisecond)
[info] - rate limiter updates when below maxRate (0 milliseconds)
[info] - rate limiter stays below maxRate despite large updates (0 milliseconds)
[info] StreamingContextSuite:
[info] - from no conf constructor (50 milliseconds)
[info] - from no conf + spark home (49 milliseconds)
[info] - from no conf + spark home + env (51 milliseconds)
[info] - from conf with settings (70 milliseconds)
[info] - from existing SparkContext (50 milliseconds)
[info] - from existing SparkContext with settings (80 milliseconds)
[info] - from checkpoint (179 milliseconds)
[info] - checkPoint from conf (91 milliseconds)
[info] - state matching (1 millisecond)
[info] - start and stop state check (58 milliseconds)
[info] - run Spark in yarn-client mode with additional jar (21 seconds, 46 milliseconds)
[info] - start with non-serializable DStream checkpoints (109 milliseconds)
[info] - start failure should stop internal components (72 milliseconds)
[info] - start should set local properties of streaming jobs correctly (472 milliseconds)
[info] - start multiple times (55 milliseconds)
[info] - stop multiple times (60 milliseconds)
[info] - stop before start (62 milliseconds)
[info] - start after stop (66 milliseconds)
[info] - stop only streaming context (162 milliseconds)
[info] - stop(stopSparkContext=true) after stop(stopSparkContext=false) (73 milliseconds)
[info] - line wrapper only initialized once when used as encoder outer scope (3 seconds, 718 milliseconds)
Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20210507094544-0000).
Spark session available as 'spark'.

// Exiting paste mode, now interpreting.

[info] - stop gracefully (6 seconds, 75 milliseconds)
[info] - stop gracefully even if a receiver misses StopReceiver (646 milliseconds)
[info] - define case class and create Dataset together with paste mode (6 seconds, 932 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405949141).
Spark session available as 'spark'.
[info] - takeSample (18 seconds, 523 milliseconds)
[info] - takeSample from an empty rdd (6 milliseconds)
[info] - randomSplit (282 milliseconds)
[info] - runJob on an invalid partition (4 milliseconds)
[info] - sort an empty RDD (31 milliseconds)
[info] - sortByKey (156 milliseconds)
[info] - sortByKey ascending parameter (105 milliseconds)
[info] - sortByKey with explicit ordering (57 milliseconds)
[info] - repartitionAndSortWithinPartitions (38 milliseconds)
[info] - cartesian on empty RDD (8 milliseconds)
[info] - cartesian on non-empty RDDs (33 milliseconds)
[info] - intersection (46 milliseconds)
[info] - intersection strips duplicates in an input (42 milliseconds)
[info] - zipWithIndex (17 milliseconds)
[info] - zipWithIndex with a single partition (6 milliseconds)
[info] - zipWithIndex chained with other RDDs (SPARK-4433) (19 milliseconds)
[info] - zipWithUniqueId (36 milliseconds)
[info] - retag with implicit ClassTag (21 milliseconds)
[info] - parent method (3 milliseconds)
[info] - getNarrowAncestors (12 milliseconds)
[info] - getNarrowAncestors with multiple parents (11 milliseconds)
[info] - getNarrowAncestors with cycles (12 milliseconds)
[info] - task serialization exception should not hang scheduler (21 milliseconds)
[info] - RDD.partitions() fails fast when partition indices are incorrect (SPARK-13021) (2 milliseconds)
[info] - nested RDDs are not supported (SPARK-5063) (16 milliseconds)
[info] - actions cannot be performed inside of transformations (SPARK-5063) (16 milliseconds)
[info] - custom RDD coalescer (269 milliseconds)
[info] - SPARK-18406: race between end-of-task and completion iterator read lock release (20 milliseconds)
[info] - SPARK-23496: order of input partitions can result in severe skew in coalesce (5 milliseconds)
[info] - cannot run actions after SparkContext has been stopped (SPARK-5063) (140 milliseconds)
[info] - cannot call methods on a stopped SparkContext (SPARK-5063) (1 millisecond)
[info] BasicSchedulerIntegrationSuite:
[info] - super simple job (115 milliseconds)
[info] - :replay should work correctly (3 seconds, 689 milliseconds)
[info] - multi-stage job (136 milliseconds)
[info] - job with fetch failure (314 milliseconds)
[info] - job failure after 4 attempts (127 milliseconds)
[info] OutputCommitCoordinatorSuite:
[info] - Only one of two duplicate commit tasks should commit (55 milliseconds)
[info] - If commit fails and the task is retried, it should not be locked and will succeed. (51 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405952761).
Spark session available as 'spark'.
[info] - spark-shell should find imported types in class constructors and extends clause (2 seconds, 349 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405955166).
Spark session available as 'spark'.
[info] - Job should not complete if all commits are denied (5 seconds, 4 milliseconds)
[info] - Only authorized committer failures can clear the authorized committer lock (SPARK-6614) (5 milliseconds)
[info] - SPARK-19631: Do not allow failed attempts to be authorized for committing (5 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1620405957808).
Spark session available as 'spark'.
[info] - SPARK-24589: Differentiate tasks from different stage attempts (4 milliseconds)
[info] - SPARK-24589: Make sure stage state is cleaned up (963 milliseconds)
[info] TaskMetricsSuite:
[info] - mutating values (1 millisecond)
[info] - mutating shuffle read metrics values (0 milliseconds)
[info] - mutating shuffle write metrics values (0 milliseconds)
[info] - mutating input metrics values (0 milliseconds)
[info] - mutating output metrics values (0 milliseconds)
[info] - merging multiple shuffle read metrics (0 milliseconds)
[info] - additional accumulables (1 millisecond)
[info] OutputCommitCoordinatorIntegrationSuite:
[info] - exception thrown in OutputCommitter.commitTask() (102 milliseconds)
[info] UIUtilsSuite:
[info] - makeDescription(plainText = false) (35 milliseconds)
[info] - makeDescription(plainText = true) (12 milliseconds)
[info] - SPARK-11906: Progress bar should not overflow because of speculative tasks (3 milliseconds)
[info] - decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.) (1 millisecond)
[info] - SPARK-20393: Prevent newline characters in parameters. (0 milliseconds)
[info] - SPARK-20393: Prevent script from parameters running on page. (1 millisecond)
[info] - SPARK-20393: Prevent javascript from parameters running on page. (0 milliseconds)
[info] - SPARK-20393: Prevent links from parameters on page. (0 milliseconds)
[info] - SPARK-20393: Prevent popups from parameters on page. (0 milliseconds)
[info] SumEvaluatorSuite:
[info] - correct handling of count 1 (2 milliseconds)
[info] - correct handling of count 0 (1 millisecond)
[info] - correct handling of NaN (0 milliseconds)
[info] - correct handling of > 1 values (9 milliseconds)
[info] - test count > 1 (2 milliseconds)
[info] ApplicationCacheSuite:
[info] - Completed UI get (52 milliseconds)
[info] - Test that if an attempt ID is set, it must be used in lookups (5 milliseconds)
[info] - Incomplete apps refreshed (15 milliseconds)
[info] - spark-shell should shadow val/def definitions correctly (5 seconds, 674 milliseconds)
[info] - Large Scale Application Eviction (363 milliseconds)
[info] - Attempts are Evicted (20 milliseconds)
[info] - redirect includes query params (32 milliseconds)
[info] RpcAddressSuite:
[info] - hostPort (1 millisecond)
[info] - fromSparkURL (1 millisecond)
[info] - fromSparkURL: a typo url (1 millisecond)
[info] - fromSparkURL: invalid scheme (1 millisecond)
[info] - toSparkURL (1 millisecond)
[info] HistoryServerSuite:
[info] - run Spark in yarn-cluster mode with additional jar (21 seconds, 28 milliseconds)
Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20210507094601-0000).
Spark session available as 'spark'.
[info] - application list json (1 second, 422 milliseconds)
[info] - completed app list json (28 milliseconds)
[info] - running app list json (14 milliseconds)
[info] - minDate app list json (13 milliseconds)
[info] - maxDate app list json (12 milliseconds)
[info] - maxDate2 app list json (13 milliseconds)
[info] - minEndDate app list json (13 milliseconds)
[info] - maxEndDate app list json (21 milliseconds)
[info] - minEndDate and maxEndDate app list json (11 milliseconds)
[info] - minDate and maxEndDate app list json (11 milliseconds)
[info] - limit app list json (12 milliseconds)
[info] - one app json (94 milliseconds)
[info] - one app multi-attempt json (11 milliseconds)

// Exiting paste mode, now interpreting.

[info] - job list json (554 milliseconds)
[info] - job list from multi-attempt app json(1) (310 milliseconds)
[info] - job list from multi-attempt app json(2) (193 milliseconds)
[info] - one job json (10 milliseconds)
[info] - succeeded job list json (8 milliseconds)
[info] - succeeded&failed job list json (10 milliseconds)
[info] - executor list json (17 milliseconds)
[info] - stage list json (72 milliseconds)
[info] - complete stage list json (10 milliseconds)
[info] - failed stage list json (8 milliseconds)
[info] - one stage json (34 milliseconds)
[info] - one stage attempt json (30 milliseconds)
[info] - stop slow receiver gracefully (15 seconds, 886 milliseconds)
[info] - registering and de-registering of streamingSource (73 milliseconds)
[info] - stage task summary w shuffle write (349 milliseconds)
[info] - SPARK-28709 registering and de-registering of progressListener (134 milliseconds)
[info] - stage task summary w shuffle read (21 milliseconds)
[info] - stage task summary w/ custom quantiles (31 milliseconds)
[info] - stage task list (17 milliseconds)
[info] - stage task list w/ offset & length (34 milliseconds)
[info] - stage task list w/ sortBy (12 milliseconds)
[info] - stage task list w/ sortBy short names: -runtime (14 milliseconds)
[info] - stage task list w/ sortBy short names: runtime (13 milliseconds)
[info] - stage list with accumulable json (20 milliseconds)
[info] - stage with accumulable json (24 milliseconds)
[info] - stage task list from multi-attempt app json(1) (11 milliseconds)
[info] - stage task list from multi-attempt app json(2) (14 milliseconds)
[info] - blacklisting for stage (225 milliseconds)
[info] - blacklisting node for stage (228 milliseconds)
[info] - rdd list storage json (15 milliseconds)
[info] - executor node blacklisting (136 milliseconds)
[info] - executor node blacklisting unblacklisting (138 milliseconds)
[info] - executor memory usage (7 milliseconds)
[info] - app environment (64 milliseconds)
[info] - download all logs for app with multiple attempts (111 milliseconds)
[info] - download one log for app with multiple attempts (80 milliseconds)
[info] - response codes on bad paths (24 milliseconds)
[info] - automatically retrieve uiRoot from request through Knox (32 milliseconds)
[info] - static relative links are prefixed with uiRoot (spark.ui.proxyBase) (8 milliseconds)
[info] - /version api endpoint (6 milliseconds)
[info] - SPARK-26633: ExecutorClassLoader.getResourceAsStream find REPL classes (5 seconds, 385 milliseconds)
[info] SingletonReplSuite:
[info] - awaitTermination (2 seconds, 99 milliseconds)
[info] - awaitTermination after stop (63 milliseconds)
[info] - awaitTermination with error in task (384 milliseconds)
[info] - awaitTermination with error in job generation (475 milliseconds)
Spark context available as 'sc' (master = local-cluster[2,1,1024], app id = app-20210507094606-0000).
Spark session available as 'spark'.
[info] - awaitTerminationOrTimeout (1 second, 176 milliseconds)
[info] - getOrCreate (1 second, 50 milliseconds)
[info] - getActive and getActiveOrCreate (175 milliseconds)
[info] - ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) (4 seconds, 722 milliseconds)
[info] - security manager starts with spark.authenticate set (49 milliseconds)
[info] - getActiveOrCreate with checkpoint (927 milliseconds)
[info] - multiple streaming contexts (60 milliseconds)
[info] - DStream and generated RDD creation sites (462 milliseconds)
[info] - throw exception on using active or stopped context (67 milliseconds)
[info] - queueStream doesn't support checkpointing (579 milliseconds)
[info] - simple foreach with accumulator (3 seconds, 16 milliseconds)
[info] - Creating an InputDStream but not using it should not crash (949 milliseconds)
[info] - external vars (1 second, 550 milliseconds)
[info] - external classes (1 second, 5 milliseconds)
[info] - incomplete apps get refreshed (4 seconds, 356 milliseconds)
[info] - external functions (1 second, 4 milliseconds)
[info] - ui and api authorization checks (965 milliseconds)
[info] - access history application defaults to the last attempt id (282 milliseconds)
[info] JVMObjectTrackerSuite:
[info] - JVMObjectId does not take null IDs (2 milliseconds)
[info] - JVMObjectTracker (4 milliseconds)
[info] BlockManagerSuite:
[info] - StorageLevel object caching (0 milliseconds)
[info] - BlockManagerId object caching (1 millisecond)
[info] - BlockManagerId.isDriver() backwards-compatibility with legacy driver ids (SPARK-6716) (1 millisecond)
[info] - master + 1 manager interaction (54 milliseconds)
[info] - master + 2 managers interaction (112 milliseconds)
[info] - removing block (60 milliseconds)
[info] - removing rdd (57 milliseconds)
[info] - removing broadcast (311 milliseconds)
[info] - reregistration on heart beat (42 milliseconds)
[info] - reregistration on block update (39 milliseconds)
[info] - reregistration doesn't dead lock (687 milliseconds)
[info] - correct BlockResult returned from get() calls (108 milliseconds)
[info] - optimize a location order of blocks without topology information (35 milliseconds)
[info] - optimize a location order of blocks with topology information (49 milliseconds)
[info] - SPARK-9591: getRemoteBytes from another location when Exception throw (150 milliseconds)
[info] - run Spark in yarn-cluster mode unsuccessfully (18 seconds, 27 milliseconds)
[info] - external functions that access vars (3 seconds, 796 milliseconds)
[info] - SPARK-14252: getOrElseUpdate should still read from remote storage (100 milliseconds)
[info] - in-memory LRU storage (36 milliseconds)
[info] - in-memory LRU storage with serialization (82 milliseconds)
[info] - in-memory LRU storage with off-heap (77 milliseconds)
[info] - in-memory LRU for partitions of same RDD (31 milliseconds)
[info] - in-memory LRU for partitions of multiple RDDs (33 milliseconds)
[info] - on-disk storage (encryption = off) (78 milliseconds)
[info] - on-disk storage (encryption = on) (78 milliseconds)
[info] - disk and memory storage (encryption = off) (45 milliseconds)
[info] - disk and memory storage (encryption = on) (68 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = off) (45 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = on) (45 milliseconds)
[info] - disk and memory storage with serialization (encryption = off) (65 milliseconds)
[info] - broadcast vars (1 second, 504 milliseconds)
[info] - disk and memory storage with serialization (encryption = on) (61 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = off) (65 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = on) (54 milliseconds)
[info] - disk and off-heap memory storage (encryption = off) (70 milliseconds)
[info] - disk and off-heap memory storage (encryption = on) (80 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = off) (70 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = on) (67 milliseconds)
[info] - LRU with mixed storage levels (encryption = off) (88 milliseconds)
[info] - LRU with mixed storage levels (encryption = on) (106 milliseconds)
[info] - in-memory LRU with streams (encryption = off) (47 milliseconds)
[info] - in-memory LRU with streams (encryption = on) (32 milliseconds)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
	at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:750)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2107)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2132)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
	at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:850)
	at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:848)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - LRU with mixed storage levels and streams (encryption = off) (175 milliseconds)
[info] - SPARK-18560 Receiver data should be deserialized properly. (9 seconds, 542 milliseconds)
[info] - interacting with files (1 second, 521 milliseconds)
[info] - LRU with mixed storage levels and streams (encryption = on) (208 milliseconds)
[info] - negative byte values in ByteBufferInputStream (1 millisecond)
[info] - overly large block (46 milliseconds)
[info] - SPARK-22955 graceful shutdown shouldn't lead to job generation error (395 milliseconds)
[info] RateControllerSuite:
[info] - block compression (434 milliseconds)
[info] - RateController - rate controller publishes updates after batches complete (488 milliseconds)
[info] - block store put failure (15 milliseconds)
[info] - turn off updated block statuses (50 milliseconds)
[info] - updated block statuses (59 milliseconds)
[info] - query block statuses (79 milliseconds)
[info] - get matching blocks (78 milliseconds)
[info] - ReceiverRateController - published rates reach receivers (565 milliseconds)
[info] FailureSuite:
[info] - SPARK-1194 regression: fix the same-RDD rule for cache replacement (42 milliseconds)
[info] - safely unroll blocks through putIterator (disk) (54 milliseconds)
[info] - read-locked blocks cannot be evicted from memory (112 milliseconds)
[info] - remove block if a read fails due to missing DiskStore files (SPARK-15736) (289 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should fail after hitting a threshold) (37 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should succeed after location refresh) (43 milliseconds)
[info] - SPARK-17484: block status is properly updated following an exception in put() (80 milliseconds)
[info] - local-cluster mode (2 seconds, 505 milliseconds)
[info] - SPARK-17484: master block locations are updated following an invalid remote block fetch (122 milliseconds)
[info] - SPARK-1199 two instances of same class don't type check. (1 second, 5 milliseconds)
[info] - SPARK-2452 compound statements. (303 milliseconds)
[info] - SPARK-2576 importing implicits (3 seconds, 11 milliseconds)
[info] - SPARK-20640: Shuffle registration timeout and maxAttempts conf are working (5 seconds, 187 milliseconds)
[info] - fetch remote block to local disk if block size is larger than threshold (46 milliseconds)
[info] - query locations of blockIds (8 milliseconds)
[info] CompactBufferSuite:
[info] - empty buffer (2 milliseconds)
[info] - basic inserts (9 milliseconds)
[info] - adding sequences (5 milliseconds)
[info] - adding the same buffer to itself (3 milliseconds)
[info] MasterSuite:
[info] - can use a custom recovery mode factory (131 milliseconds)
[info] - master correctly recover the application (130 milliseconds)
[info] - master/worker web ui available (307 milliseconds)
[info] - Datasets and encoders (2 seconds, 5 milliseconds)
[info] - SPARK-2632 importing a method from non serializable class and not using it. (1 second, 504 milliseconds)
[info] - collecting objects of class defined in repl (1 second, 4 milliseconds)
[info] - collecting objects of class defined in repl - shuffling (1 second, 504 milliseconds)
[info] - replicating blocks of object with class defined in repl (2 seconds, 5 milliseconds)
[info] - should clone and clean line object in ClosureCleaner (3 seconds, 43 milliseconds)
[info] - SPARK-31399: should clone+clean line object w/ non-serializable state in ClosureCleaner (1 second, 4 milliseconds)
[info] - SPARK-31399: ClosureCleaner should discover indirectly nested closure in inner class (1 second, 4 milliseconds)
[info] - newProductSeqEncoder with REPL defined class (1 second, 5 milliseconds)
[info] DateExpressionsSuite:
[info] - datetime function current_date (115 milliseconds)
[info] - datetime function current_timestamp (2 milliseconds)
[info] - DayOfYear (2 seconds, 647 milliseconds)
[info] - run Spark in yarn-cluster mode failure after sc initialized (31 seconds, 36 milliseconds)
[info] - Year (9 seconds, 643 milliseconds)
[info] - master/worker web ui available with reverseProxy (30 seconds, 250 milliseconds)
[info] - basic scheduling - spread out (63 milliseconds)
[info] - basic scheduling - no spread out (69 milliseconds)
[info] - basic scheduling with more memory - spread out (47 milliseconds)
[info] - basic scheduling with more memory - no spread out (52 milliseconds)
[info] - scheduling with max cores - spread out (51 milliseconds)
[info] - scheduling with max cores - no spread out (53 milliseconds)
[info] - scheduling with cores per executor - spread out (67 milliseconds)
[info] - scheduling with cores per executor - no spread out (63 milliseconds)
[info] - scheduling with cores per executor AND max cores - spread out (61 milliseconds)
[info] - scheduling with cores per executor AND max cores - no spread out (86 milliseconds)
[info] - scheduling with executor limit - spread out (59 milliseconds)
[info] - scheduling with executor limit - no spread out (66 milliseconds)
[info] - scheduling with executor limit AND max cores - spread out (54 milliseconds)
[info] - multiple failures with map (38 seconds, 342 milliseconds)
[info] - scheduling with executor limit AND max cores - no spread out (279 milliseconds)
[info] - scheduling with executor limit AND cores per executor - spread out (46 milliseconds)
[info] - scheduling with executor limit AND cores per executor - no spread out (51 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - spread out (51 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - no spread out (68 milliseconds)
[info] - SPARK-13604: Master should ask Worker kill unknown executors and drivers (90 milliseconds)
[info] - SPARK-20529: Master should reply the address received from worker (77 milliseconds)
[info] - SPARK-19900: there should be a corresponding driver for the app after relaunching driver (2 seconds, 135 milliseconds)
[info] CompletionIteratorSuite:
[info] - basic test (1 millisecond)
[info] - reference to sub iterator should not be available after completion (536 milliseconds)
[info] SparkListenerSuite:
[info] - don't call sc.stop in listener (74 milliseconds)
[info] - basic creation and shutdown of LiveListenerBus (3 milliseconds)
[info] - bus.stop() waits for the event queue to completely drain (2 milliseconds)
[info] - metrics for dropped listener events (3 milliseconds)
[info] - basic creation of StageInfo (74 milliseconds)
[info] - basic creation of StageInfo with shuffle (196 milliseconds)
[info] - StageInfo with fewer tasks than partitions (73 milliseconds)
[info] - local metrics (1 second, 531 milliseconds)
[info] - onTaskGettingResult() called when result fetched remotely (389 milliseconds)
[info] - onTaskGettingResult() not called when result sent directly (86 milliseconds)
[info] - onTaskEnd() should be called for all started tasks, even after job has been killed (60 milliseconds)
[info] - SparkListener moves on if a listener throws an exception (12 milliseconds)
[info] - registering listeners via spark.extraListeners (80 milliseconds)
[info] - add and remove listeners to/from LiveListenerBus queues (5 milliseconds)
[info] - interrupt within listener is handled correctly: throw interrupt (24 milliseconds)
[info] - interrupt within listener is handled correctly: set Thread interrupted (22 milliseconds)
[info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: throw interrupt (14 milliseconds)
[info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: set Thread interrupted (1 millisecond)
[info] SortShuffleSuite:
[info] - Quarter (11 seconds, 178 milliseconds)
[info] - groupByKey without compression (191 milliseconds)
[info] - run Python application in yarn-client mode (21 seconds, 31 milliseconds)
[info] - shuffle non-zero block size (4 seconds, 885 milliseconds)
[info] - Month (6 seconds, 640 milliseconds)
[info] - shuffle serializer (4 seconds, 221 milliseconds)
[info] - zero sized blocks (6 seconds, 646 milliseconds)
[info] - zero sized blocks without kryo (6 seconds, 786 milliseconds)
[info] - run Python application in yarn-cluster mode (24 seconds, 57 milliseconds)
[info] - shuffle on mutable pairs (4 seconds, 267 milliseconds)
[info] - sorting on mutable pairs (3 seconds, 972 milliseconds)
[info] - Day / DayOfMonth (25 seconds, 226 milliseconds)
[info] - multiple failures with updateStateByKey (38 seconds, 913 milliseconds)
[info] BasicOperationsSuite:
[info] - map (318 milliseconds)
[info] - flatMap (340 milliseconds)
[info] - filter (289 milliseconds)
[info] - Seconds (1 second, 990 milliseconds)
[info] - glom (272 milliseconds)
[info] - mapPartitions (290 milliseconds)
[info] - DayOfWeek (546 milliseconds)
[info] - WeekDay (397 milliseconds)
[info] - repartition (more partitions) (580 milliseconds)
[info] - WeekOfYear (353 milliseconds)
[info] - cogroup using mutable pairs (4 seconds, 95 milliseconds)
[info] - repartition (fewer partitions) (468 milliseconds)
[info] - DateFormat (372 milliseconds)
[info] - groupByKey (336 milliseconds)
[info] - reduceByKey (354 milliseconds)
[info] - reduce (340 milliseconds)
[info] - count (534 milliseconds)
[info] - countByValue (374 milliseconds)
[info] - mapValues (360 milliseconds)
[info] - flatMapValues (316 milliseconds)
[info] - union (259 milliseconds)
[info] - union with input stream return None (172 milliseconds)
[info] - StreamingContext.union (295 milliseconds)
[info] - transform (267 milliseconds)
[info] - transform with NULL (157 milliseconds)
[info] - transform with input stream return None (182 milliseconds)
[info] - subtract mutable pairs (4 seconds, 206 milliseconds)
[info] - transformWith (380 milliseconds)
[info] - transformWith with input stream return None (149 milliseconds)
[info] - StreamingContext.transform (297 milliseconds)
[info] - StreamingContext.transform with input stream return None (139 milliseconds)
[info] - cogroup (367 milliseconds)
[info] - join (361 milliseconds)
[info] - leftOuterJoin (416 milliseconds)
[info] - rightOuterJoin (325 milliseconds)
[info] - fullOuterJoin (304 milliseconds)
[info] - updateStateByKey (373 milliseconds)
[info] - updateStateByKey - simple with initial value RDD (346 milliseconds)
[info] - updateStateByKey - testing time stamps as input (353 milliseconds)
[info] - updateStateByKey - with initial value RDD (342 milliseconds)
[info] - updateStateByKey - object lifecycle (352 milliseconds)
[info] - sort with Java non serializable class - Kryo (4 seconds, 757 milliseconds)
[info] - slice (2 seconds, 177 milliseconds)
[info] - slice - has not been initialized (79 milliseconds)
[info] - rdd cleanup - map and window (367 milliseconds)
[info] - rdd cleanup - updateStateByKey (742 milliseconds)
[info] - sort with Java non serializable class - Java (3 seconds, 450 milliseconds)
[info] - shuffle with different compression settings (SPARK-3426) (571 milliseconds)
[info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (377 milliseconds)
[info] - cannot find its local shuffle file if no execution of the stage and rerun shuffle (93 milliseconds)
[info] - metrics for shuffle without aggregation (296 milliseconds)
Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - rdd cleanup - input blocks and persisted RDDs (2 seconds, 290 milliseconds)
[info] JavaStreamingListenerWrapperSuite:
[info] - basic (13 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.JavaWriteAheadLogSuite.testCustomWAL started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.009s
[info] Test run started
[info] Test org.apache.spark.streaming.JavaMapWithStateSuite.testBasicFunction started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.542s
[info] Test run started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testGreaterEq started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testDiv started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMinus started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testTimes started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testLess started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testPlus started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testGreater started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMinutes started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testMilliseconds started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testLessEq started
[info] Test org.apache.spark.streaming.JavaDurationSuite.testSeconds started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.003s
[info] Test run started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testStreamingContextTransform started
[info] - metrics for shuffle with aggregation (848 milliseconds)
[info] - multiple simultaneous attempts for one task (SPARK-8029) (69 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFlatMapValues started
[info] - SortShuffleManager properly cleans up files for shuffles that use the serialized path (152 milliseconds)
[info] - SortShuffleManager properly cleans up files for shuffles that use the deserialized path (92 milliseconds)
[info] TaskSchedulerImplSuite:
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByWindowWithInverse started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testMapPartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairFilter started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testRepartitionFewerPartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCombineByKey started
[info] - Scheduler does not always schedule tasks on the same workers (1 second, 260 milliseconds)
[info] - run Python application in yarn-cluster mode using spark.yarn.appMasterEnv to override local envvar (25 seconds, 40 milliseconds)
[info] - Scheduler correctly accounts for multiple CPUs per task (61 milliseconds)
[info] - Scheduler does not crash when tasks are not serializable (54 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testContextGetOrCreate started
[info] - concurrent attempts for the same stage only have one active taskset (70 milliseconds)
[info] - don't schedule more tasks after a taskset is zombie (65 milliseconds)
[info] - if a zombie attempt finishes, continue scheduling tasks for non-zombie attempts (64 milliseconds)
[info] - tasks are not re-scheduled while executor loss reason is pending (49 milliseconds)
[info] - scheduled tasks obey task and stage blacklists (156 milliseconds)
[info] - scheduled tasks obey node and executor blacklists (106 milliseconds)
[info] - abort stage when all executors are blacklisted and we cannot acquire new executor (69 milliseconds)
[info] - SPARK-22148 abort timer should kick in when task is completely blacklisted & no new executor can be acquired (82 milliseconds)
[info] - SPARK-22148 try to acquire a new executor when task is unschedulable with 1 executor (66 milliseconds)
[info] - SPARK-22148 abort timer should clear unschedulableTaskSetToExpiryTime for all TaskSets (73 milliseconds)
[info] - SPARK-22148 Ensure we don't abort the taskSet if we haven't been completely blacklisted (59 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testWindowWithSlideDuration started
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 0 (113 milliseconds)
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 1 (129 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testQueueStream started
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 2 (113 milliseconds)
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 3 (104 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCountByValue started
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 4 (116 milliseconds)
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 5 (162 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testMap started
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 6 (110 milliseconds)
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 7 (117 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairToNormalRDDTransform started
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 8 (128 milliseconds)
[info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 9 (132 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairReduceByKey started
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 0 (122 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 1 (112 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCount started
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 2 (113 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 3 (200 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCheckpointMasterRecovery started
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 4 (106 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 5 (109 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 6 (109 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 7 (135 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 8 (134 milliseconds)
[info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 9 (150 milliseconds)
[info] - abort stage if executor loss results in unschedulability from previously failed tasks (49 milliseconds)
[info] - don't abort if there is an executor available, though it hasn't had scheduled tasks yet (50 milliseconds)
[info] - SPARK-16106 locality levels updated if executor added to existing host (49 milliseconds)
[info] - scheduler checks for executors that can be expired from blacklist (47 milliseconds)
[info] - if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553) (214 milliseconds)
[info] - if a task finishes with TaskState.LOST its executor is marked as dead (59 milliseconds)
[info] - Locality should be used for bulk offers even with delay scheduling off (50 milliseconds)
[info] - With delay scheduling off, tasks can be run at any locality level immediately (46 milliseconds)
[info] - TaskScheduler should throw IllegalArgumentException when schedulingMode is not supported (47 milliseconds)
[info] - Completions in zombie tasksets update status of non-zombie taskset (117 milliseconds)
[info] - don't schedule for a barrier taskSet if available slots are less than pending tasks (55 milliseconds)
[info] - schedule tasks for a barrier taskSet if all tasks can be launched together (60 milliseconds)
[info] - SPARK-29263: barrier TaskSet can't schedule when higher prio taskset takes the slots (52 milliseconds)
[info] - cancelTasks shall kill all the running tasks and fail the stage (51 milliseconds)
[info] - killAllTaskAttempts shall kill all the running tasks and not fail the stage (54 milliseconds)
[info] - mark taskset for a barrier stage as zombie in case a task fails (60 milliseconds)
[info] ChunkedByteBufferFileRegionSuite:
[info] - transferTo can stop and resume correctly (3 milliseconds)
[info] - transfer to with random limits (332 milliseconds)
[info] CryptoStreamUtilsSuite:
[info] - crypto configuration conversion (1 millisecond)
[info] - shuffle encryption key length should be 128 by default (1 millisecond)
[info] - create 256-bit key (1 millisecond)
[info] - create key with invalid length (1 millisecond)
[info] - serializer manager integration (4 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testUnion started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFlatMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByKeyAndWindowWithInverse started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testGlom started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testJoin started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairFlatMap started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairToPairFlatMapWithChangingTypes started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairMapPartitions started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testRepartitionMorePartitions started
[info] - encryption key propagation to executors (3 seconds, 259 milliseconds)
[info] - crypto stream wrappers (11 milliseconds)
[info] - error handling wrapper (5 milliseconds)
[info] RBackendSuite:
[info] - close() clears jvmObjectTracker (1 millisecond)
[info] NettyRpcAddressSuite:
[info] - toString (0 milliseconds)
[info] - toString for client mode (1 millisecond)
[info] SparkContextSchedulerCreationSuite:
[info] - bad-master (52 milliseconds)
[info] - local (51 milliseconds)
[info] - local-* (46 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByWindowWithoutInverse started
[info] - local-n (51 milliseconds)
[info] - local-*-n-failures (73 milliseconds)
[info] - local-n-failures (78 milliseconds)
[info] - bad-local-n (56 milliseconds)
[info] - bad-local-n-failures (67 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testLeftOuterJoin started
[info] - local-default-parallelism (62 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testVariousTransform started
[info] - local-cluster (279 milliseconds)
[info] CommandUtilsSuite:
[info] - set libraryPath correctly (29 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testTransformWith started
[info] - auth secret shouldn't appear in java opts (73 milliseconds)
[info] ImplicitOrderingSuite:
[info] - basic inference of Orderings (99 milliseconds)
[info] AsyncRDDActionsSuite:
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testVariousTransformWith started
[info] - countAsync (14 milliseconds)
[info] - collectAsync (14 milliseconds)
[info] - foreachAsync (11 milliseconds)
[info] - foreachPartitionAsync (13 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testTextFileStream started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairGroupByKey started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCoGroup started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testInitialization started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testSocketString started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testGroupByKeyAndWindow started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduceByKeyAndWindow started
[info] - takeAsync (1 second, 568 milliseconds)
[info] - async success handling (7 milliseconds)
[info] - async failure handling (10 milliseconds)
[info] - FutureAction result, infinite wait (8 milliseconds)
[info] - FutureAction result, finite wait (6 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testForeachRDD started
[info] - FutureAction result, timeout (23 milliseconds)
[info] - SimpleFutureAction callback must not consume a thread while waiting (33 milliseconds)
[info] - ComplexFutureAction callback must not consume a thread while waiting (22 milliseconds)
[info] JsonProtocolSuite:
[info] - writeApplicationInfo (6 milliseconds)
[info] - writeWorkerInfo (1 millisecond)
[info] - writeApplicationDescription (3 milliseconds)
[info] - writeExecutorRunner (2 milliseconds)
[info] - writeDriverInfo (5 milliseconds)
[info] - writeMasterState (2 milliseconds)
[info] - writeWorkerState (111 milliseconds)
[info] VersionUtilsSuite:
[info] - Parse Spark major version (3 milliseconds)
[info] - Parse Spark minor version (2 milliseconds)
[info] - Parse Spark major and minor versions (1 millisecond)
[info] - Return short version number (2 milliseconds)
[info] ShuffleBlockFetcherIteratorSuite:
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFileStream started
[info] - successful 3 local reads + 2 remote reads (52 milliseconds)
[info] - release current unexhausted buffer in case the task completes early (18 milliseconds)
[info] - fail all blocks if any of the remote request fails (18 milliseconds)
[info] - retry corrupt blocks (28 milliseconds)
[info] - big blocks are not checked for corruption (9 milliseconds)
[info] - retry corrupt blocks (disabled) (17 milliseconds)
[info] - Blocks should be shuffled to disk when size of the request is above the threshold(maxReqSizeShuffleToMem). (21 milliseconds)
[info] - fail zero-size blocks (12 milliseconds)
[info] BlockManagerMasterSuite:
[info] - SPARK-31422: getMemoryStatus should not fail after BlockManagerMaster stops (3 milliseconds)
[info] - SPARK-31422: getStorageStatus should not fail after BlockManagerMaster stops (1 millisecond)
[info] DiskBlockManagerSuite:
[info] - basic block creation (1 millisecond)
[info] - enumerating blocks (22 milliseconds)
[info] - SPARK-22227: non-block files are skipped (1 millisecond)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairTransform started
[info] TaskSetManagerSuite:
[info] - TaskSet with no preferences (72 milliseconds)
[info] - multiple offers with no preferences (72 milliseconds)
[info] - skip unsatisfiable locality levels (59 milliseconds)
[info] - basic delay scheduling (63 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testFilter started
[info] - we do not need to delay scheduling when we only have noPref tasks in the queue (85 milliseconds)
[info] - delay scheduling with fallback (78 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testPairMap2 started
[info] - delay scheduling with failed hosts (66 milliseconds)
[info] - task result lost (64 milliseconds)
[info] - repeated failures lead to task set abortion (57 milliseconds)
[info] - executors should be blacklisted after task failure, in spite of locality preferences (63 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testMapValues started
[info] - new executors get added and lost (67 milliseconds)
[info] - Executors exit for reason unrelated to currently running tasks (61 milliseconds)
[info] - test RACK_LOCAL tasks (68 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testReduce started
[info] - do not emit warning when serialized task is small (59 milliseconds)
[info] - emit warning when serialized task is large (58 milliseconds)
[info] - Not serializable exception thrown if the task cannot be serialized (64 milliseconds)
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testUpdateStateByKey started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testTransform started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testWindow started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testCountByValueAndWindow started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testRawSocketStream started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testSocketTextStream started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testUpdateStateByKeyWithInitial started
[info] Test test.org.apache.spark.streaming.JavaAPISuite.testContextState started
[info] - abort the job if total size of results is too large (1 second, 680 milliseconds)
Exception in thread "task-result-getter-3" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
	at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:121)
	at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:757)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:88)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] Test run finished: 0 failed, 0 ignored, 53 total, 17.052s
[info] - [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie (62 milliseconds)
[info] Test run started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testStreamingContextTransform started
[info] - [SPARK-22074] Task killed by other attempt task should not be resubmitted (98 milliseconds)
[info] - speculative and noPref task should be scheduled after node-local (82 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testFlatMapValues started
[info] - node-local tasks should be scheduled right away when there are only node-local and no-preference tasks (91 milliseconds)
[info] - SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished (81 milliseconds)
[info] - SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished (97 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testMapPartitions started
[info] - Ensure TaskSetManager is usable after addition of levels (55 milliseconds)
[info] - Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL. (61 milliseconds)
[info] - Test TaskLocation for different host type. (1 millisecond)
[info] - Kill other task attempts when one attempt belonging to the same task succeeds (55 milliseconds)
[info] - Killing speculative tasks does not count towards aborting the taskset (99 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairFilter started
[info] - SPARK-19868: DagScheduler only notified of taskEnd when state is ready (75 milliseconds)
[info] - SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names (45 milliseconds)
[info] - don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks (59 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testCombineByKey started
[info] - update application blacklist for shuffle-fetch (67 milliseconds)
[info] - update blacklist before adding pending task to avoid race condition (58 milliseconds)
[info] - SPARK-21563 context's added jars shouldn't change mid-TaskSet (51 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testMap started
[info] - [SPARK-24677] Avoid NoSuchElementException from MedianHeap (59 milliseconds)
[info] - SPARK-24755 Executor loss can cause task to not be resubmitted (54 milliseconds)
[info] - SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success (57 milliseconds)
[info] DiskBlockObjectWriterSuite:
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairToNormalRDDTransform started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairReduceByKey started
[info] - verify write metrics (327 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairMap started
[info] - verify write metrics on revert (205 milliseconds)
[info] - Reopening a closed block writer (2 milliseconds)
[info] - calling revertPartialWritesAndClose() on a partial write should truncate up to commit (1 millisecond)
[info] - calling revertPartialWritesAndClose() after commit() should have no effect (2 milliseconds)
[info] - calling revertPartialWritesAndClose() on a closed block writer should have no effect (3 milliseconds)
[info] - commit() and close() should be idempotent (4 milliseconds)
[info] - revertPartialWritesAndClose() should be idempotent (2 milliseconds)
[info] - commit() and close() without ever opening or writing (1 millisecond)
[info] PartitioningSuite:
[info] - HashPartitioner equality (0 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testFlatMap started
[info] - RangePartitioner equality (75 milliseconds)
[info] - RangePartitioner getPartition (99 milliseconds)
[info] - RangePartitioner for keys that are not Comparable (but with Ordering) (30 milliseconds)
[info] - RangPartitioner.sketch (38 milliseconds)
[info] - RangePartitioner.determineBounds (1 millisecond)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testReduceByKeyAndWindowWithInverse started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testReduceByWindow started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairFlatMap started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairToPairFlatMapWithChangingTypes started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairMapPartitions started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testVariousTransform started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testTransformWith started
[info] - RangePartitioner should run only one job if data is roughly balanced (1 second, 670 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testVariousTransformWith started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testReduceByKeyAndWindow started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairTransform started
[info] - user class path first in client mode (20 seconds, 34 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testFilter started
[info] Test test.org.apache.spark.streaming.Java8APISuite.testPairMap2 started
[info] - RangePartitioner should work well on unbalanced data (1 second, 208 milliseconds)
[info] - RangePartitioner should return a single partition for empty RDDs (11 milliseconds)
[info] - HashPartitioner not equal to RangePartitioner (11 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testMapValues started
[info] - partitioner preservation (60 milliseconds)
[info] - partitioning Java arrays should fail (19 milliseconds)
[info] - zero-length partitions should be correctly handled (93 milliseconds)
[info] - Number of elements in RDD is less than number of partitions (10 milliseconds)
[info] - defaultPartitioner (3 milliseconds)
[info] - defaultPartitioner when defaultParallelism is set (4 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testReduce started
[info] PartitionwiseSampledRDDSuite:
[info] - seed distribution (50 milliseconds)
[info] - concurrency (30 milliseconds)
[info] PartitionPruningRDDSuite:
[info] - Pruned Partitions inherit locality prefs correctly (1 millisecond)
[info] - Pruned Partitions can be unioned  (33 milliseconds)
[info] Test test.org.apache.spark.streaming.Java8APISuite.testUpdateStateByKey started
[info] ClosureCleanerSuite2:
[info] - get inner closure classes (8 milliseconds)
[info] - get outer classes and objects (3 milliseconds)
[info] - get outer classes and objects with nesting (5 milliseconds)
[info] - find accessed fields (7 milliseconds)
[info] - find accessed fields with nesting (9 milliseconds)
[info] - clean basic serializable closures (10 milliseconds)
[info] - clean basic non-serializable closures (41 milliseconds)
[info] - clean basic nested serializable closures (8 milliseconds)
[info] - clean basic nested non-serializable closures (39 milliseconds)
[info] - clean complicated nested serializable closures (6 milliseconds)
[info] - clean complicated nested non-serializable closures (50 milliseconds)
[info] - verify nested LMF closures !!! CANCELED !!! (1 millisecond)
[info]   ClosureCleanerSuite2.supportsLMFs was false (ClosureCleanerSuite2.scala:579)
[info]   org.scalatest.exceptions.TestCanceledException:
[info]   at org.scalatest.Assertions$class.newTestCanceledException(Assertions.scala:531)
[info]   at org.scalatest.FunSuite.newTestCanceledException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssume(Assertions.scala:516)
[info]   at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply$mcV$sp(ClosureCleanerSuite2.scala:579)
[info]   at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply(ClosureCleanerSuite2.scala:578)
[info]   at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply(ClosureCleanerSuite2.scala:578)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]