Failed

Console Output

[EnvInject] - Mask passwords passed as build parameters.
Started by an SCM change
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
PATH=/home/anaconda/bin:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/home/jenkins/.cargo/bin:/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.1.1/bin/:/home/android-sdk/:/usr/local/bin:/bin:/usr/bin
JAVA_HOME=/usr/java/jdk1.8.0_191
JAVA_7_HOME=/usr/java/jdk1.7.0_79
SPARK_BRANCH=branch-2.4
AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
AMPLAB_JENKINS="true"
SPARK_TESTING=1
LANG=en_US.UTF-8
SPARK_MASTER_SBT_HADOOP_2_7=1

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on amp-jenkins-worker-06 (centos spark-test) in workspace /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6
 > /home/jenkins/git2/bin/git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > /home/jenkins/git2/bin/git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > /home/jenkins/git2/bin/git --version # timeout=10
 > /home/jenkins/git2/bin/git fetch --tags --progress https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/*
 > /home/jenkins/git2/bin/git rev-parse origin/branch-2.4^{commit} # timeout=10
Checking out Revision 6e0c1162bf95938081e93c77a39b2826fec9c84e (origin/branch-2.4)
 > /home/jenkins/git2/bin/git config core.sparsecheckout # timeout=10
 > /home/jenkins/git2/bin/git checkout -f 6e0c1162bf95938081e93c77a39b2826fec9c84e
 > /home/jenkins/git2/bin/git rev-list 7285eea6839d40cbac15101c633a9a572eb3b603 # timeout=10
[spark-branch-2.4-test-sbt-hadoop-2.6] $ /bin/bash /tmp/hudson463981114632115036.sh
Removing R/lib/
Removing R/pkg/man/
Removing assembly/target/
Removing build/apache-maven-3.5.4/
Removing build/sbt-launch-0.13.17.jar
Removing build/scala-2.11.12/
Removing build/zinc-0.3.15/
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/derby.log
Removing core/dummy/
Removing core/ignored/
Removing core/metastore_db/
Removing core/target/
Removing derby.log
Removing dev/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/lint-r-report.log
Removing dev/pr-deps/
Removing dev/pycodestyle-2.4.0.py
Removing dev/sparktestsupport/__init__.pyc
Removing dev/sparktestsupport/__pycache__/
Removing dev/sparktestsupport/modules.pyc
Removing dev/sparktestsupport/shellutils.pyc
Removing dev/sparktestsupport/toposort.pyc
Removing dev/target/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/spark-warehouse/
Removing external/avro/target/
Removing external/flume-assembly/target/
Removing external/flume-sink/target/
Removing external/flume/checkpoint/
Removing external/flume/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/spark-warehouse/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10/target/
Removing external/kafka-0-8-assembly/target/
Removing external/kafka-0-8/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/checkpoint/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing launcher/target/
Removing lib/
Removing logs/
Removing metastore_db/
Removing mllib-local/target/
Removing mllib/checkpoint/
Removing mllib/spark-warehouse/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/__pycache__/
Removing python/docs/__pycache__/
Removing python/docs/_build/
Removing python/docs/epytext.pyc
Removing python/lib/pyspark.zip
Removing python/pyspark/__init__.pyc
Removing python/pyspark/__pycache__/
Removing python/pyspark/_globals.pyc
Removing python/pyspark/accumulators.pyc
Removing python/pyspark/broadcast.pyc
Removing python/pyspark/cloudpickle.pyc
Removing python/pyspark/conf.pyc
Removing python/pyspark/context.pyc
Removing python/pyspark/files.pyc
Removing python/pyspark/find_spark_home.pyc
Removing python/pyspark/heapq3.pyc
Removing python/pyspark/java_gateway.pyc
Removing python/pyspark/join.pyc
Removing python/pyspark/ml/__init__.pyc
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/base.pyc
Removing python/pyspark/ml/classification.pyc
Removing python/pyspark/ml/clustering.pyc
Removing python/pyspark/ml/common.pyc
Removing python/pyspark/ml/evaluation.pyc
Removing python/pyspark/ml/feature.pyc
Removing python/pyspark/ml/fpm.pyc
Removing python/pyspark/ml/image.pyc
Removing python/pyspark/ml/linalg/__init__.pyc
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__init__.pyc
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/param/shared.pyc
Removing python/pyspark/ml/pipeline.pyc
Removing python/pyspark/ml/recommendation.pyc
Removing python/pyspark/ml/regression.pyc
Removing python/pyspark/ml/stat.pyc
Removing python/pyspark/ml/tuning.pyc
Removing python/pyspark/ml/util.pyc
Removing python/pyspark/ml/wrapper.pyc
Removing python/pyspark/mllib/__init__.pyc
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/classification.pyc
Removing python/pyspark/mllib/clustering.pyc
Removing python/pyspark/mllib/common.pyc
Removing python/pyspark/mllib/evaluation.pyc
Removing python/pyspark/mllib/feature.pyc
Removing python/pyspark/mllib/fpm.pyc
Removing python/pyspark/mllib/linalg/__init__.pyc
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/linalg/distributed.pyc
Removing python/pyspark/mllib/random.pyc
Removing python/pyspark/mllib/recommendation.pyc
Removing python/pyspark/mllib/regression.pyc
Removing python/pyspark/mllib/stat/KernelDensity.pyc
Removing python/pyspark/mllib/stat/__init__.pyc
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/stat/_statistics.pyc
Removing python/pyspark/mllib/stat/distribution.pyc
Removing python/pyspark/mllib/stat/test.pyc
Removing python/pyspark/mllib/tree.pyc
Removing python/pyspark/mllib/util.pyc
Removing python/pyspark/profiler.pyc
Removing python/pyspark/rdd.pyc
Removing python/pyspark/rddsampler.pyc
Removing python/pyspark/resultiterable.pyc
Removing python/pyspark/serializers.pyc
Removing python/pyspark/shuffle.pyc
Removing python/pyspark/sql/__init__.pyc
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/catalog.pyc
Removing python/pyspark/sql/column.pyc
Removing python/pyspark/sql/conf.pyc
Removing python/pyspark/sql/context.pyc
Removing python/pyspark/sql/dataframe.pyc
Removing python/pyspark/sql/functions.pyc
Removing python/pyspark/sql/group.pyc
Removing python/pyspark/sql/readwriter.pyc
Removing python/pyspark/sql/session.pyc
Removing python/pyspark/sql/streaming.pyc
Removing python/pyspark/sql/types.pyc
Removing python/pyspark/sql/udf.pyc
Removing python/pyspark/sql/utils.pyc
Removing python/pyspark/sql/window.pyc
Removing python/pyspark/statcounter.pyc
Removing python/pyspark/status.pyc
Removing python/pyspark/storagelevel.pyc
Removing python/pyspark/streaming/__init__.pyc
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/context.pyc
Removing python/pyspark/streaming/dstream.pyc
Removing python/pyspark/streaming/flume.pyc
Removing python/pyspark/streaming/kafka.pyc
Removing python/pyspark/streaming/kinesis.pyc
Removing python/pyspark/streaming/listener.pyc
Removing python/pyspark/streaming/util.pyc
Removing python/pyspark/taskcontext.pyc
Removing python/pyspark/traceback_utils.pyc
Removing python/pyspark/util.pyc
Removing python/pyspark/version.pyc
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/spark-warehouse/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing spark-warehouse/
Removing sql/__pycache__/
Removing sql/catalyst/loc/
Removing sql/catalyst/target/
Removing sql/core/loc/
Removing sql/core/paris/
Removing sql/core/spark-warehouse/
Removing sql/core/target/
Removing sql/hive-thriftserver/derby.log
Removing sql/hive-thriftserver/metastore_db/
Removing sql/hive-thriftserver/spark-warehouse/
Removing sql/hive-thriftserver/target/
Removing sql/hive/derby.log
Removing sql/hive/loc/
Removing sql/hive/metastore_db/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/checkpoint/
Removing streaming/target/
Removing target/
Removing tools/target/
Removing work/
+++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Updating SparkR documentation
Loading SparkR
Creating a new generic function for 'as.data.frame' in package 'SparkR'
Creating a new generic function for 'colnames' in package 'SparkR'
Creating a new generic function for 'colnames<-' in package 'SparkR'
Creating a new generic function for 'cov' in package 'SparkR'
Creating a new generic function for 'drop' in package 'SparkR'
Creating a new generic function for 'na.omit' in package 'SparkR'
Creating a new generic function for 'filter' in package 'SparkR'
Creating a new generic function for 'intersect' in package 'SparkR'
Creating a new generic function for 'sample' in package 'SparkR'
Creating a new generic function for 'transform' in package 'SparkR'
Creating a new generic function for 'subset' in package 'SparkR'
Creating a new generic function for 'summary' in package 'SparkR'
Creating a new generic function for 'union' in package 'SparkR'
Creating a new generic function for 'endsWith' in package 'SparkR'
Creating a new generic function for 'startsWith' in package 'SparkR'
Creating a new generic function for 'lag' in package 'SparkR'
Creating a new generic function for 'rank' in package 'SparkR'
Creating a new generic function for 'sd' in package 'SparkR'
Creating a new generic function for 'var' in package 'SparkR'
Creating a new generic function for 'window' in package 'SparkR'
Creating a new generic function for 'predict' in package 'SparkR'
Creating a new generic function for 'rbind' in package 'SparkR'
Creating a generic function for 'substr' from package 'base' in package 'SparkR'
Creating a generic function for '%in%' from package 'base' in package 'SparkR'
Creating a generic function for 'lapply' from package 'base' in package 'SparkR'
Creating a generic function for 'Filter' from package 'base' in package 'SparkR'
Creating a generic function for 'nrow' from package 'base' in package 'SparkR'
Creating a generic function for 'ncol' from package 'base' in package 'SparkR'
Creating a generic function for 'factorial' from package 'base' in package 'SparkR'
Creating a generic function for 'atan2' from package 'base' in package 'SparkR'
Creating a generic function for 'ifelse' from package 'base' in package 'SparkR'
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/DESCRIPTION
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:592] @name May only use one @name per block
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:733] @name May only use one @name per block
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing FPGrowthModel-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/
* installing *source* package 'SparkR' ...
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for 'as.data.frame' in package 'SparkR'
Creating a new generic function for 'colnames' in package 'SparkR'
Creating a new generic function for 'colnames<-' in package 'SparkR'
Creating a new generic function for 'cov' in package 'SparkR'
Creating a new generic function for 'drop' in package 'SparkR'
Creating a new generic function for 'na.omit' in package 'SparkR'
Creating a new generic function for 'filter' in package 'SparkR'
Creating a new generic function for 'intersect' in package 'SparkR'
Creating a new generic function for 'sample' in package 'SparkR'
Creating a new generic function for 'transform' in package 'SparkR'
Creating a new generic function for 'subset' in package 'SparkR'
Creating a new generic function for 'summary' in package 'SparkR'
Creating a new generic function for 'union' in package 'SparkR'
Creating a new generic function for 'endsWith' in package 'SparkR'
Creating a new generic function for 'startsWith' in package 'SparkR'
Creating a new generic function for 'lag' in package 'SparkR'
Creating a new generic function for 'rank' in package 'SparkR'
Creating a new generic function for 'sd' in package 'SparkR'
Creating a new generic function for 'var' in package 'SparkR'
Creating a new generic function for 'window' in package 'SparkR'
Creating a new generic function for 'predict' in package 'SparkR'
Creating a new generic function for 'rbind' in package 'SparkR'
Creating a generic function for 'substr' from package 'base' in package 'SparkR'
Creating a generic function for '%in%' from package 'base' in package 'SparkR'
Creating a generic function for 'lapply' from package 'base' in package 'SparkR'
Creating a generic function for 'Filter' from package 'base' in package 'SparkR'
Creating a generic function for 'nrow' from package 'base' in package 'SparkR'
Creating a generic function for 'ncol' from package 'base' in package 'SparkR'
Creating a generic function for 'factorial' from package 'base' in package 'SparkR'
Creating a generic function for 'atan2' from package 'base' in package 'SparkR'
Creating a generic function for 'ifelse' from package 'base' in package 'SparkR'
** help
*** installing help indices
  converting help for package 'SparkR'
    finding HTML links ... done
    AFTSurvivalRegressionModel-class        html  
    ALSModel-class                          html  
    BisectingKMeansModel-class              html  
    DecisionTreeClassificationModel-class   html  
    DecisionTreeRegressionModel-class       html  
    FPGrowthModel-class                     html  
    GBTClassificationModel-class            html  
    GBTRegressionModel-class                html  
    GaussianMixtureModel-class              html  
    GeneralizedLinearRegressionModel-class
                                            html  
    GroupedData                             html  
    IsotonicRegressionModel-class           html  
    KMeansModel-class                       html  
    KSTest-class                            html  
    LDAModel-class                          html  
    LinearSVCModel-class                    html  
    LogisticRegressionModel-class           html  
    MultilayerPerceptronClassificationModel-class
                                            html  
    NaiveBayesModel-class                   html  
    RandomForestClassificationModel-class   html  
    RandomForestRegressionModel-class       html  
    SparkDataFrame                          html  
    StreamingQuery                          html  
    WindowSpec                              html  
    alias                                   html  
    approxQuantile                          html  
    arrange                                 html  
    as.data.frame                           html  
    attach                                  html  
    avg                                     html  
    awaitTermination                        html  
    between                                 html  
    broadcast                               html  
    cache                                   html  
    cacheTable                              html  
    cancelJobGroup                          html  
    cast                                    html  
    checkpoint                              html  
    clearCache                              html  
    clearJobGroup                           html  
    coalesce                                html  
    collect                                 html  
    coltypes                                html  
    column                                  html  
    column_aggregate_functions              html  
    column_collection_functions             html  
    column_datetime_diff_functions          html  
    column_datetime_functions               html  
    column_math_functions                   html  
    column_misc_functions                   html  
    column_nonaggregate_functions           html  
    column_string_functions                 html  
    column_window_functions                 html  
    columnfunctions                         html  
    columns                                 html  
    corr                                    html  
    count                                   html  
    cov                                     html  
    createDataFrame                         html  
    createExternalTable-deprecated          html  
    createOrReplaceTempView                 html  
    createTable                             html  
    crossJoin                               html  
    crosstab                                html  
    cube                                    html  
    currentDatabase                         html  
    dapply                                  html  
    dapplyCollect                           html  
    describe                                html  
    dim                                     html  
    distinct                                html  
    drop                                    html  
    dropDuplicates                          html  
    dropTempTable-deprecated                html  
    dropTempView                            html  
    dtypes                                  html  
    endsWith                                html  
    eq_null_safe                            html  
    except                                  html  
    exceptAll                               html  
    explain                                 html  
    filter                                  html  
    first                                   html  
    fitted                                  html  
    freqItems                               html  
    gapply                                  html  
    gapplyCollect                           html  
    getLocalProperty                        html  
    getNumPartitions                        html  
    glm                                     html  
    groupBy                                 html  
    hashCode                                html  
    head                                    html  
    hint                                    html  
    histogram                               html  
    insertInto                              html  
    install.spark                           html  
    intersect                               html  
    intersectAll                            html  
    isActive                                html  
    isLocal                                 html  
    isStreaming                             html  
    join                                    html  
    last                                    html  
    lastProgress                            html  
    limit                                   html  
    listColumns                             html  
    listDatabases                           html  
    listFunctions                           html  
    listTables                              html  
    localCheckpoint                         html  
    match                                   html  
    merge                                   html  
    mutate                                  html  
    nafunctions                             html  
    ncol                                    html  
    not                                     html  
    nrow                                    html  
    orderBy                                 html  
    otherwise                               html  
    over                                    html  
    partitionBy                             html  
    persist                                 html  
    pivot                                   html  
    predict                                 html  
    print.jobj                              html  
    print.structField                       html  
    print.structType                        html  
    printSchema                             html  
    queryName                               html  
    randomSplit                             html  
    rangeBetween                            html  
    rbind                                   html  
    read.df                                 html  
    read.jdbc                               html  
    read.json                               html  
    read.ml                                 html  
    read.orc                                html  
    read.parquet                            html  
    read.stream                             html  
    read.text                               html  
    recoverPartitions                       html  
    refreshByPath                           html  
    refreshTable                            html  
    registerTempTable-deprecated            html  
    rename                                  html  
    repartition                             html  
    repartitionByRange                      html  
    rollup                                  html  
    rowsBetween                             html  
    sample                                  html  
    sampleBy                                html  
    saveAsTable                             html  
    schema                                  html  
    select                                  html  
    selectExpr                              html  
    setCheckpointDir                        html  
    setCurrentDatabase                      html  
    setJobDescription                       html  
    setJobGroup                             html  
    setLocalProperty                        html  
    setLogLevel                             html  
    show                                    html  
    showDF                                  html  
    spark.addFile                           html  
    spark.als                               html  
    spark.bisectingKmeans                   html  
    spark.decisionTree                      html  
    spark.fpGrowth                          html  
    spark.gaussianMixture                   html  
    spark.gbt                               html  
    spark.getSparkFiles                     html  
    spark.getSparkFilesRootDirectory        html  
    spark.glm                               html  
    spark.isoreg                            html  
    spark.kmeans                            html  
    spark.kstest                            html  
    spark.lapply                            html  
    spark.lda                               html  
    spark.logit                             html  
    spark.mlp                               html  
    spark.naiveBayes                        html  
    spark.randomForest                      html  
    spark.survreg                           html  
    spark.svmLinear                         html  
    sparkR.callJMethod                      html  
    sparkR.callJStatic                      html  
    sparkR.conf                             html  
    sparkR.init-deprecated                  html  
    sparkR.newJObject                       html  
    sparkR.session                          html  
    sparkR.session.stop                     html  
    sparkR.uiWebUrl                         html  
    sparkR.version                          html  
    sparkRHive.init-deprecated              html  
    sparkRSQL.init-deprecated               html  
    sql                                     html  
    startsWith                              html  
    status                                  html  
    stopQuery                               html  
    storageLevel                            html  
    str                                     html  
    structField                             html  
    structType                              html  
    subset                                  html  
    substr                                  html  
    summarize                               html  
    summary                                 html  
    tableNames                              html  
    tableToDF                               html  
    tables                                  html  
    take                                    html  
    toJSON                                  html  
    uncacheTable                            html  
    union                                   html  
    unionByName                             html  
    unpersist                               html  
    windowOrderBy                           html  
    windowPartitionBy                       html  
    with                                    html  
    withColumn                              html  
    withWatermark                           html  
    write.df                                html  
    write.jdbc                              html  
    write.json                              html  
    write.ml                                html  
    write.orc                               html  
    write.parquet                           html  
    write.stream                            html  
    write.text                              html  
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib/sparkr.zip SparkR
+ popd
[info] Using build tool sbt with Hadoop profile hadoop2.6 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests: 

========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.

========================================================================
Running Scala style checks
========================================================================
Scalastyle checks passed.

========================================================================
Running Python style checks
========================================================================
pycodestyle checks passed.
rm -rf _build/*
pydoc checks passed.

========================================================================
Running R style checks
========================================================================

Attaching package: 'SparkR'

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union


Attaching package: 'testthat'

The following objects are masked from 'package:SparkR':

    describe, not

lintr checks passed.

========================================================================
Running build tests
========================================================================
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-2.6
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-2.7
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven install for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Performing Maven validate for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Generating dependency manifest for hadoop-3.1
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn
Using `mvn` from path: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/build/apache-maven-3.5.4/bin/mvn

========================================================================
Building Spark
========================================================================
[info] Building Spark (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos test:package streaming-kafka-0-8-assembly/assembly streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Avro compiler using stringType=CharSequence
[info] Compiling Avro IDL /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/src/main/avro/sparkflume.avdl
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tags...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}spark...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
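
The NOP-logger fallback above means no SLF4J binding was visible on the classpath when sbt initialized logging. A minimal sketch of pinning a binding in build.sbt, assuming the standard slf4j-log4j12 artifact; the version shown is illustrative, not read from this build's dependency tree:

// Sketch: make an SLF4J binding (and its backend) explicit so the
// StaticLoggerBinder lookup succeeds instead of falling back to NOP.
libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-api"     % "1.7.16",
  "org.slf4j" % "slf4j-log4j12" % "1.7.16",
  "log4j"     % "log4j"         % "1.2.17"
)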
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.6-SNAPSHOT.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[info] Done updating.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}launcher...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}unsafe...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sketch...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kvstore...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib-local...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-sink...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-common...
[info] Compiling 2 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-shuffle...
[info] Compiling 77 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/classes...
[info] Done updating.
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}core...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-yarn...
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/classes...
[info] Compiling 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/test-classes...
[info] Compiling 20 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/classes...
[info] Compiling 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/classes...
[info] Compiling 16 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/classes...
[info] Compiling 6 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/classes...
[info] Compiling 12 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 24 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Compiling 21 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/test-classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:22:  Unsafe is internal proprietary API and may be removed in a future release
[warn] import sun.misc.Unsafe;
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:28:  Unsafe is internal proprietary API and may be removed in a future release
[warn]   private static final Unsafe _UNSAFE;
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:150:  Unsafe is internal proprietary API and may be removed in a future release
[warn]     sun.misc.Unsafe unsafe;
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:152:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:154:  Unsafe is internal proprietary API and may be removed in a future release
[warn]       unsafe = (sun.misc.Unsafe) unsafeField.get(null);
[warn]                         ^
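
The five warnings above all trace to Platform.java obtaining sun.misc.Unsafe reflectively. A minimal Scala sketch of that same pattern (mirroring only the quoted Java lines, not the file's full fallback logic):

// Reflectively grab the JDK-internal Unsafe singleton, as the quoted
// Platform.java lines do; javac warns because sun.misc.* is internal
// proprietary API and may be removed in a future JDK release.
import sun.misc.Unsafe

val unsafe: Unsafe = {
  val field = classOf[Unsafe].getDeclaredField("theUnsafe")
  field.setAccessible(true)              // bypass the private modifier
  field.get(null).asInstanceOf[Unsafe]   // static field, so the owner is null
}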
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 7 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:61:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers.next()).thenThrow(RuntimeException.class);
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:68:  [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn]     Mockito.when(buffers2.next()).thenReturn(mockManagedBuffer).thenThrow(RuntimeException.class);
[warn]                                                                          ^
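
These two [unchecked] warnings are the usual varargs quirk of Mockito's thenThrow(Class<? extends Throwable>...): the compiler must build a generic array for the class-literal overload. Passing a Throwable instance avoids it; a hedged sketch, with the mock names reused from the warning text for illustration only:

// thenThrow(Throwable...) takes exception instances, so no generic
// vararg array of Class objects is created and the warning disappears.
import java.util.Iterator
import org.mockito.Mockito
import org.apache.spark.network.buffer.ManagedBuffer

val buffers = Mockito.mock(classOf[Iterator[ManagedBuffer]])
Mockito.when(buffers.next()).thenThrow(new RuntimeException())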
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Compiling 13 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/test-classes...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.6-SNAPSHOT-tests.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/src_managed/main/compiled_avro/org/apache/spark/streaming/flume/sink/EventBatch.java:243:  [unchecked] unchecked cast
[warn]         record.events = fieldSetFlags()[2] ? this.events : (java.util.List<org.apache.spark.streaming.flume.sink.SparkSinkEvent>) defaultValue(fields()[2]);
[warn]                                                                                                                                               ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/test-classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
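This eviction warning only reports how sbt resolved the conflict: the newest netty 3.x wins and the build proceeds with 3.9.9.Final. A hypothetical build.sbt fragment (sbt 0.13.x style) that would make the same choice explicit, so the selection reflects a deliberate pin rather than latest-wins resolution:

    // Pin the legacy netty 3.x artifact explicitly; `sbt evicted`
    // prints the full eviction report mentioned in the warning.
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"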
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}catalyst...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kubernetes...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}yarn...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mesos...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}graphx...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}ganglia-lgpl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming...
[info] Compiling 494 Scala sources and 81 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/test-classes...
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8...
[info] Done updating.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]                             ^
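These four warnings are Spark's own deprecations surfacing inside core (the new attemptNumber is still backed by the deprecated attemptId field, and the legacy AccumulableParam accumulators are wrapped by AccumulatorV2). A minimal sketch of the AccumulatorV2 replacement the last message points to, assuming an existing SparkContext sc:

    // The built-in LongAccumulator is an AccumulatorV2 subclass and
    // replaces AccumulableParam-based accumulators.
    val acc = sc.longAccumulator("records")
    sc.parallelize(1 to 100).foreach(_ => acc.add(1))
    println(acc.value)  // 100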
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8-assembly...
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10-assembly...
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl-assembly...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}avro...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql-kafka-0-10...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.7.0.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] four warnings found
[info] Done updating.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
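sbt prints this when a project defines several objects with main methods, so run and packaging cannot pick an entry point on their own. A hypothetical build.sbt fragment that would make the choice deterministic (the class name is illustrative only, not Spark's actual configuration):

    // Select one main class explicitly instead of letting sbt prompt
    // or warn; any fully-qualified object with a main method works.
    mainClass in (Compile, run) := Some("org.apache.spark.deploy.SparkSubmit")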
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/classes...
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/classes...
[info] Compiling 103 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/classes...
[info] Compiling 239 Scala sources and 26 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/test-classes...
[info] Compiling 38 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/classes...
[info] Compiling 36 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/classes...
[info] Compiling 26 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/classes...
[info] Compiling 240 Scala sources and 31 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn]                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn]                                    ^
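The Mesos messages all point at the protobuf role accessors that Mesos deprecated when multi-role frameworks were introduced. A hedged sketch only, assuming Mesos 1.2+ protobuf bindings in which the repeated roles field plus the MULTI_ROLE capability supersede the singular setRole:

    import org.apache.mesos.Protos

    // Assumption: addRoles/MULTI_ROLE exist in the linked Mesos protos
    // (introduced around Mesos 1.2); field values are placeholders.
    val fwInfo = Protos.FrameworkInfo.newBuilder()
      .setUser("spark")
      .setName("example-framework")
      .addRoles("spark-role")
      .addCapabilities(
        Protos.FrameworkInfo.Capability.newBuilder()
          .setType(Protos.FrameworkInfo.Capability.Type.MULTI_ROLE))
      .build()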
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.6-SNAPSHOT.jar ...
[warn] 6 warnings found
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/classes...
[info] Compiling 11 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/classes...
[info] Compiling 11 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/classes...
[info] Compiling 10 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn]                      ^
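The kafka-0-10 warnings are all the same issue: Kafka 2.0 deprecated KafkaConsumer.poll(timeoutMs: Long) in favor of an overload taking java.time.Duration. A minimal sketch of the replacement call, assuming an already-configured consumer:

    import java.time.Duration
    import org.apache.kafka.clients.consumer.KafkaConsumer

    // poll(Duration) is the non-deprecated overload since Kafka 2.0.
    def pollOnce(consumer: KafkaConsumer[String, String]) =
      consumer.poll(Duration.ZERO)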
[warn] four warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn]            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn]        ^
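Most of the kinesis-asl warnings track AWS SDK deprecations, but the KinesisUtils.createStream ones name their replacement directly: KinesisInputDStream.builder. A minimal sketch of that builder, assuming an existing StreamingContext ssc; the stream, endpoint, region, and app names are placeholders:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    // Builder-style replacement for the deprecated createStream overloads.
    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("myStream")
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new KinesisInitialPositions.Latest())
      .checkpointAppName("myApp")
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()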
[warn] four warnings found
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 17 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn]                                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn]                                            ^
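Every kafka-0-8 warning carries the same advice: the whole module is deprecated in favor of the Kafka 0.10 integration. A minimal sketch of the equivalent direct stream under spark-streaming-kafka-0-10, assuming an existing StreamingContext ssc; the broker, group, and topic are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    // The 0.10 API takes consumer config as a Map and deserializer
    // classes instead of the 0.8 Decoder type parameters.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("topic"), kafkaParams))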
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
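javac only summarizes deprecated-API use unless the lint flag is on. A hypothetical build.sbt fragment that follows the hint above:

    // Make javac list each deprecated-API use individually.
    javacOptions in Compile += "-Xlint:deprecation"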
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 25 warnings found
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
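Editor's note: every warning in the run above is the same deprecation, raised because the kafka-0-8 module references its own deprecated classes; the advice is to move to the kafka-0-10 artifact. A minimal sketch of the 0.10 direct stream using the spark-streaming-kafka-0-10 API shipped with this branch (ssc, the broker address, and the topic name are assumed):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.HasOffsetRanges

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("topic"), kafkaParams))

    // OffsetRange / HasOffsetRanges have direct kafka010 counterparts:
    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // commit or log offsetRanges here
    }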
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
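Editor's note: sbt resolves these evictions automatically (the higher version wins), so nothing fails here. If the choice ever needed to be made explicit, a hypothetical build.sbt pin would look like:

    // Hypothetical: pin the evicted modules to the versions sbt already selects
    dependencyOverrides += "com.google.code.findbugs" % "jsr305" % "3.0.2"
    dependencyOverrides += "io.netty" % "netty" % "3.9.9.Final"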
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive-thriftserver...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-assembly...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 	    +- org.apache.flume:flume-ng-core:1.6.0               (depends on 3.5.12.Final)
[warn] 	    +- org.apache.flume:flume-ng-sdk:1.6.0                (depends on 3.5.12.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator seconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator milliseconds should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn]     eventually(timeout(5 seconds), interval(200 milliseconds)) {
[warn]                                                 ^
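Editor's note: both postfix warnings have the same two fixes, which the compiler message itself spells out: enable the feature, or drop the postfix syntax. A self-contained sketch with plain Scala durations (the suite's own timeout/interval helpers are assumed to accept these values):

    import scala.concurrent.duration._

    // Option 1: keep `5 seconds` and enable the feature explicitly
    // import scala.language.postfixOps

    // Option 2: dot syntax compiles without any language import
    val t = 5.seconds
    val i = 200.milliseconds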
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}repl...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:48: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]   implicit def setAccum[A]: AccumulableParam[mutable.Set[A], A] =
[warn]                             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:49: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     new AccumulableParam[mutable.Set[A], A] {
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val longAcc = sc.accumulator(0L)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc: Accumulator[Int] = sc.accumulator(0)
[warn]                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:145: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val setAcc = sc.accumulableCollection(mutable.HashSet[Int]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:146: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val bufferAcc = sc.accumulableCollection(mutable.ArrayBuffer[Int]())
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:147: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn]       val mapAcc = sc.accumulableCollection(mutable.HashMap[Int, String]())
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]       val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn]     var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn]                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn]                                   ^
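Editor's note: the AccumulatorSuite warnings above all point at the same migration. A minimal AccumulatorV2 sketch covering the two patterns the suite exercises, assuming an active SparkContext sc:

    import scala.collection.mutable
    import org.apache.spark.util.AccumulatorV2

    // sc.accumulator(0) / sc.accumulator(0L) become typed factory methods:
    val counter = sc.longAccumulator("counter")
    counter.add(1L)

    // AccumulableParam[mutable.Set[A], A] becomes a custom AccumulatorV2:
    class SetAccumulator[A] extends AccumulatorV2[A, mutable.Set[A]] {
      private val set = mutable.Set.empty[A]
      def isZero: Boolean = set.isEmpty
      def copy(): SetAccumulator[A] = { val c = new SetAccumulator[A]; c.set ++= set; c }
      def reset(): Unit = set.clear()
      def add(v: A): Unit = set += v
      def merge(other: AccumulatorV2[A, mutable.Set[A]]): Unit = set ++= other.value
      def value: mutable.Set[A] = set
    }
    sc.register(new SetAccumulator[Int], "set")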
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1194: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(1), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1198: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1264: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1278: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]       SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType",
[warn]                                                   ^
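Editor's note: the four AppStatusListenerSuite warnings are a mechanical rename; the deprecation message gives the replacement directly:

    // attemptId -> attemptNumber, everything else unchanged
    SparkListenerTaskEnd(stage1.stageId, stage1.attemptNumber, "taskType", Success, tasks(1), null)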
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn]                                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:168: trait AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn]     val param = new AccumulatorParam[MyData] {
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: object DoubleAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123.4)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123L)
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: object FloatAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn]     sc.accumulator(123F)
[warn]                   ^
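Editor's note: the ImplicitSuite warnings cover the remaining numeric overloads; each has a typed AccumulatorV2 factory on SparkContext (there is no Float variant, so Float values widen to Double):

    val doubleAcc = sc.doubleAccumulator  // replaces sc.accumulator(123.4) and sc.accumulator(123F)
    val longAcc = sc.longAccumulator      // replaces sc.accumulator(123) and sc.accumulator(123L)
    doubleAcc.add(123.4)
    longAcc.add(123L)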
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.6-SNAPSHOT (depends on 1.1.0)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.6-SNAPSHOT   (depends on 1.1.0)
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}examples...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}assembly...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 
[warn] 	* com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] 	    +- org.apache.arrow:arrow-vector:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.arrow:arrow-memory:0.10.0               (depends on 3.0.2)
[warn] 	    +- org.apache.spark:spark-hive_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-unsafe_2.11:2.4.6-SNAPSHOT  (depends on 1.3.9)
[warn] 	    +- org.apache.spark:spark-network-common_2.11:2.4.6-SNAPSHOT (depends on 1.3.9)
[warn] 	    +- org.apache.hadoop:hadoop-common:2.6.5              (depends on 1.3.9)
[warn] 
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] 	    +- org.apache.spark:spark-catalyst_2.11:2.4.6-SNAPSHOT (depends on 1.0.4)
[warn] 	    +- org.scala-lang:scala-compiler:2.11.12              (depends on 1.0.4)
[warn] 	    +- org.apache.spark:spark-mllib_2.11:2.4.6-SNAPSHOT   (depends on 1.0.4)
[warn] 
[warn] 	* io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] 	    +- org.apache.spark:spark-core_2.11:2.4.6-SNAPSHOT    (depends on 3.9.9.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.6               (depends on 3.6.2.Final)
[warn] 	    +- org.apache.hadoop:hadoop-hdfs:2.6.5                (depends on 3.6.2.Final)
[warn] 
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done packaging.
[info] Compiling 346 Scala sources and 93 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/classes...
[warn] 40 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]                                       ^
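Editor's note: parquet-mr 1.10 replaced the boolean ENABLE_JOB_SUMMARY with a level-valued key. A hedged sketch of the assumed-equivalent setting (key and values per the ParquetOutputFormat.JobSummaryLevel enum; sparkSession is an assumed active session):

    // "ALL" | "COMMON_ONLY" | "NONE"; NONE matches ENABLE_JOB_SUMMARY=false
    sparkSession.sparkContext.hadoopConfiguration
      .set("parquet.summary.metadata.level", "NONE")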
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/java/test/org/apache/spark/JavaAPISuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 3 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/test-classes...
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/test-classes...
[info] Compiling 23 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/test-classes...
[info] Compiling 6 Scala sources and 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/test-classes...
[info] Compiling 40 Scala sources and 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/test-classes...
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/test-classes...
[info] Compiling 28 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/test-classes...
[info] Compiling 200 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala:117: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]       FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK,
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumeStreamSuite.scala:83: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val flumeStream = FlumeUtils.createStream(
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:103: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     var offsetRanges = Array[OffsetRange]()
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:107: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:163: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:194: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:209: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, String](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:251: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:340: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:414: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:494: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(kafkaParams)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:565: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       kafkaStream: DStream[(K, V)]): Seq[(Time, Array[OffsetRange])] = {
[warn]                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:30: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private var kc: KafkaCluster = null
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:40: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     kc = new KafkaCluster(Map("metadata.broker.list" -> kafkaTestUtils.brokerAddress))
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:64: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges = Array(OffsetRange(topic, 0, 0, messages.size))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:80: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val emptyRdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:81: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       sc, kafkaParams, Array(OffsetRange(topic, 0, 0, 0)))
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:86: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val badRanges = Array(OffsetRange(topic, 0, 0, messages.size + 1))
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:88: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:102: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:113: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val ranges = rdd.get.asInstanceOf[HasOffsetRanges].offsetRanges
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   private def getRdd(kc: KafkaCluster, topics: Set[String]) = {
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:161: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               OffsetRange(tp.topic, tp.partition, fromOffset, until(tp).offset)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:165: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]               tp -> Broker(lo.host, lo.port)
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:168: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]           KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder, String](
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:130: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn]                  ^
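Editor's note: the RDD-based calls in these suites also have kafka010 counterparts; a sketch, noting that the 0.10 createRDD takes a java.util.Map of consumer params plus a location strategy (sc and kafkaParams assumed from context):

    import java.{util => ju}
    import scala.collection.JavaConverters._
    import org.apache.spark.streaming.kafka010.{KafkaUtils, LocationStrategies, OffsetRange}

    val juParams: ju.Map[String, Object] = kafkaParams.asJava  // kafkaParams: Map[String, Object]
    val ranges = Array(OffsetRange("topic", 0, 0, 100))
    val rdd = KafkaUtils.createRDD[String, String](
      sc, juParams, ranges, LocationStrategies.PreferConsistent)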
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocatorSuite.scala:168: non-variable type argument org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf in type org.apache.spark.deploy.k8s.KubernetesConf[org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf] is unchecked since it is eliminated by erasure
[warn]         if (!argument.isInstanceOf[KubernetesConf[KubernetesExecutorSpecificConf]]) {
[warn]                                   ^
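Editor's note: the erasure warning is accurate; the type argument cannot be checked at runtime, so the instanceOf test silently ignores it. A wildcard states only what is actually checkable:

    // Checks the class and makes the erased type argument explicit
    if (!argument.isInstanceOf[KubernetesConf[_]]) {
      // ...
    }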
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:113: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:116: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:121: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:124: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("role2")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:138: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:151: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "role2"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:152: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpus.exists(_.getRole() == "*"))
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:155: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "role2"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:156: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(mem.exists(_.getRole() == "*"))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:417: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:420: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]         Resource.newBuilder().setRole("*")
[warn]                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:271: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:272: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(1).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:273: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers.get(2).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:299: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     when(driver.declineOffer(mesosOffers2.get(0).getId)).thenReturn(Status.valueOf(1))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:325: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:329: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("prod")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:334: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:339: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .setRole("dev")
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:380: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn]     ).thenReturn(Status.valueOf(1))
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:397: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]     assert(cpusDev.getRole.equals("dev"))
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:400: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("mem") && r.getScalar.getValue.equals(484.0) && r.getRole.equals("prod")
[warn]                                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:403: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]       r.getName.equals("cpus") && r.getScalar.getValue.equals(1.0) && r.getRole.equals("prod")
[warn]                                                                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtilsSuite.scala:54: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn]                                 ^
[warn] 29 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:253: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:309: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       s.consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:473: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
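Editor's note: poll(long) was deprecated in kafka-clients 2.0 in favour of an explicit Duration; assuming the 2.0+ client on this test classpath, the direct replacement is:

    import java.time.Duration
    consumer.poll(Duration.ofMillis(0))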
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:60: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:88: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:100: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:178: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.createTopic(zkUtils, topic, partitions, 1, config)
[warn]                ^
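These ZkUtils/AdminUtils warnings all point at the same migration: topic administration through org.apache.kafka.clients.admin.AdminClient instead of ZooKeeper. A minimal sketch, with broker address, topic name, partition count, and replication factor assumed:

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed broker address

    val admin = AdminClient.create(props)
    try {
      // replaces AdminUtils.createTopic(zkUtils, topic, partitions, 1, config)
      val topic = new NewTopic("test-topic", 2, 1.toShort)
      admin.createTopics(Collections.singleton(topic)).all().get()
    } finally {
      admin.close()
    }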
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 24 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 7 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.6-SNAPSHOT-tests.jar ...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn]                             ^
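readFooter's non-deprecated counterpart is ParquetFileReader.open on an InputFile. A sketch of the equivalent metadata read, file path assumed (the Javadoc the warning defers to spells out the exact replacement):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    val conf = new Configuration()
    val filePath = new Path("/tmp/example.parquet") // assumed input file

    // replaces ParquetFileReader.readFooter(conf, filePath, ...).getFileMetaData
    val reader = ParquetFileReader.open(HadoopInputFile.fromPath(filePath, conf))
    try {
      println(reader.getFileMetaData.getSchema)
    } finally {
      reader.close()
    }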
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]                                                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]            ^
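The ProcessingTime deprecation names its replacement directly: Trigger.ProcessingTime. A minimal sketch against the built-in rate source:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder().master("local[2]").appName("trigger").getOrCreate()
    val rates = spark.readStream.format("rate").load()

    val query = rates.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds")) // replaces ProcessingTime("10 seconds")
      .start()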
[info] Compiling 8 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisInputDStreamBuilderSuite.scala:163: method initialPositionInStream in class Builder is deprecated: use initialPosition(initialPosition: KinesisInitialPosition)
[warn]         .initialPositionInStream(InitialPositionInStream.AT_TIMESTAMP)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:103: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream1 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:106: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val kinesisStream2 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn]                                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:113: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]     val inputStream = KinesisUtils.createStream(ssc, appName, "dummyStream",
[warn]                                    ^
[warn] four warnings found
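KinesisUtils.createStream's replacement is the KinesisInputDStream builder. A sketch mirroring the documented builder chain, with stream name, endpoint, region, and intervals assumed:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    def kinesisStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("mySparkStream")
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com") // assumed endpoint
        .regionName("us-west-2")                                // assumed region
        .initialPosition(new KinesisInitialPositions.Latest())  // replaces initialPositionInStream
        .checkpointAppName("myAppName")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()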
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisInputDStreamBuilderSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 6 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 20 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/classes...
[info] Compiling 29 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/classes...
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/classes...
[info] Compiling 304 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn]                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn]                    ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] 6 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] there were 16 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 12 Scala sources and 171 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/classes...
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 289 Scala sources and 33 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:266:  [unchecked] unchecked call to read(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
[warn]                                                    ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:270:  [unchecked] unchecked call to write(TProtocol,T) as a member of the raw type IScheme
[warn]     schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
[warn]                                                     ^
[warn]   where T is a type-variable:
[warn]     T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:313:  [unchecked] getScheme() in TArrayTypeEntryStandardSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryStandardScheme getScheme() {
[warn]                                          ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryStandardScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:361:  [unchecked] getScheme() in TArrayTypeEntryTupleSchemeFactory implements <S>getScheme() in SchemeFactory
[warn]     public TArrayTypeEntryTupleScheme getScheme() {
[warn]                                       ^
[warn]   return type requires unchecked conversion from TArrayTypeEntryTupleScheme to S
[warn]   where S is a type-variable:
[warn]     S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TBinaryColumn.java:240:  [unchecked] unchecked cast
[warn]         setValues((List<ByteBuffer>)value);
[warn]                                     ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn]                                    ^
[warn] two warnings found
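The deprecation spells out the rename: OneHotEncoderEstimator is the API that survives as OneHotEncoder in 3.0. A minimal sketch on a toy column:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[2]").appName("ohe").getOrCreate()
    import spark.implicits._

    val df = Seq(0.0, 1.0, 2.0, 1.0).toDF("categoryIndex") // assumed toy input

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    encoder.fit(df).transform(df).show()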
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/classes...
[info] Compiling 191 Scala sources and 128 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn]                                   ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:393: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]             .isInstanceOf[(String, Timestamp)])
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:392: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn]           assert(r.get().get(0, TextSocketReader.SCHEMA_TIMESTAMP)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala:204: postfix operator minute should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn]         eventually(timeout(1 minute)) {
[warn]                              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:693: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]       q1
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:227: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode("words", "word") { word: String => word.split(" ").toSeq }.select('word),
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:235: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('letters) {
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:285: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode($"*") { case Row(prefix: String, csv: String) =>
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:292: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn]       df.explode('prefix, 'csv) { case Row(prefix: String, csv: String) =>
[warn]          ^
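The Dataset.explode deprecation suggests select() with functions.explode; the first warning's word-splitting call translates roughly to:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{explode, split}

    val spark = SparkSession.builder().master("local[2]").appName("explode").getOrCreate()
    import spark.implicits._

    val df = Seq("a b c", "d e").toDF("words") // assumed toy input

    // replaces df.explode("words", "word") { word: String => word.split(" ").toSeq }
    df.select(explode(split($"words", " ")).as("word")).show()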
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:228: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(lit(0), lit(2))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]     val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn]                                                                            ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method currentRow in object functions is deprecated: Use Window.currentRow
[warn]       .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn]                     ^
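Both warnings on these window frames resolve the same way: Long boundaries for rangeBetween and the Window.currentRow constant. A sketch on toy data:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.sum

    val spark = SparkSession.builder().master("local[2]").appName("frames").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "a"), (3, "b")).toDF("key", "value") // assumed toy input

    // replaces rangeBetween(lit(0), lit(2)) and functions.currentRow
    val window = Window.partitionBy($"value").orderBy($"key")
      .rangeBetween(Window.currentRow, 2L)
    df.select($"key", sum($"key").over(window)).show()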
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/ProcessingTimeSuite.scala:30: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn]     def getIntervalMs(trigger: Trigger): Long = trigger.asInstanceOf[ProcessingTime].intervalMs
[warn]                                                                      ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetCompatibilityTest.scala:49: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readAllFootersInParallel(hadoopConf, parquetFiles, true)
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetInteroperabilitySuite.scala:178: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]                   ParquetFileReader.readFooter(hadoopConf, part.getPath, NO_FILTER)
[warn]                                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:133: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:148: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:154: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readAllFootersInParallel(configuration, fs.getFileStatus(path)).asScala.toSeq
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:158: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]     ParquetFileReader.readFooter(
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:55: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]     val executor = ProcessingTimeExecutor(ProcessingTime("1000 milliseconds"), clock)
[warn]                                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:316: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock),
[warn]                   ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:357: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn]       StartStream(ProcessingTime("10 seconds"), new StreamManualClock(60 * 1000)),
[warn]                   ^
[warn] 23 warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Compiling 14 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/test-classes...
[info] Compiling 9 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/test-classes...
[info] Compiling 87 Scala sources and 17 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/test-classes...
[info] Compiling 193 Scala sources and 66 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.6-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala:31: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn]     try f finally SessionState.detachSession()
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaContinuousTest.scala:76: reflective access of structural type member value activeTaskIdCount should be enabled
[warn] by making the implicit value scala.language.reflectiveCalls visible.
[warn] This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
[warn] or by setting the compiler option -language:reflectiveCalls.
[warn] See the Scaladoc for value scala.language.reflectiveCalls for a discussion
[warn] why the feature should be explicitly enabled.
[warn]       assert(tasksEndedListener.activeTaskIdCount.get() == 0)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:140: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         Seq(new Field("null", Schema.create(Type.NULL), "doc", null)).asJava
[warn]             ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:163: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:191: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:223: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:249: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val fields = Seq(new Field("field1", UnionOfOne, "doc", null)).asJava
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:302: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field1", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:303: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field2", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:304: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field3", complexUnionType, "doc", null),
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:305: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]         new Field("field4", complexUnionType, "doc", null)
[warn]         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:927: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn]       val avroField = new Field(name, avroType, "", null)
[warn]                       ^
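One way around the deprecated four-argument Schema.Field constructor is Avro's SchemaBuilder, which never touches JsonNode defaults. A small sketch, record and field names assumed:

    import org.apache.avro.SchemaBuilder

    // replaces new Field("field1", schema, "doc", null) when assembling a record
    val schema = SchemaBuilder.record("test").fields()
      .name("field1").doc("doc").`type`().stringType().noDefault()
      .endRecord()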
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:66: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   private var zkUtils: ZkUtils = _
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:95: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]   def zookeeperClient: ZkUtils = {
[warn]                        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:107: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn]               ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:198: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]         AdminUtils.createTopic(zkUtils, topic, partitions, 1)
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:225: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]     AdminUtils.deleteTopic(zkUtils, topic)
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:290: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:304: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     kc.poll(0)
[warn]        ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:383: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       !zkUtils.pathExists(getDeleteTopicPath(topic)),
[warn]                           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:384: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       s"${getDeleteTopicPath(topic)} still exists")
[warn]           ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]     assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn]                                                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:409: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn]       zkUtils: ZkUtils,
[warn]                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:421: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn]           AdminUtils.deleteTopic(zkUtils, topic)
[warn]                      ^
[warn] 10 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
[warn] 14 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] there were 25 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:464:  [unchecked] unchecked cast
[warn]         setLint((List<Integer>)value);
[warn]                                ^
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:120: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) < 0.1)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:135: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]     assert(model.computeCost(dataset) == summary.trainingCost)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:206: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn]       model.computeCost(dataset)
[warn]             ^
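computeCost's suggested replacement for evaluation is ClusteringEvaluator, whose default metric is the silhouette score. A sketch, assuming a DataFrame with a "features" vector column:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.evaluation.ClusteringEvaluator
    import org.apache.spark.sql.DataFrame

    def silhouette(dataset: DataFrame): Double = {
      val model = new KMeans().setK(2).setSeed(1L).fit(dataset)
      val predictions = model.transform(dataset)
      new ClusteringEvaluator().evaluate(predictions) // replaces model.computeCost(dataset)
    }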
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:46: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     ParamsSuite.checkParams(new OneHotEncoder)
[warn]                                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:51: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:74: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:96: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:110: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val encoder = new OneHotEncoder()
[warn]                       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:121: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]     val t = new OneHotEncoder()
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:156: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]       val encoder = new OneHotEncoder()
[warn]                         ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:52: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     var df = readImages(imagePath)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:55: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, false, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:58: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 1.0, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:62: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     df = readImages(imagePath, null, true, -1, true, 0.5, 0)
[warn]          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:69: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:74: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/DP153539.jpg", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:79: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/multi-channel/BGRA.png", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:84: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:90: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, false, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:96: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, 1.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:103: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]       readImages(imagePath, null, true, 3, true, -0.1, 0)
[warn]       ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:109: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 0.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:114: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, sparkSession = spark, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:119: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:124: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, -3, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:129: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val df = readImages(imagePath, null, true, 0, true, 1.0, 0)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:136: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn]     val images = readImages(imagePath + "/multi-channel/").collect
[warn]                  ^
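All of these readImages warnings point at the image data source that ships with Spark 2.4. A minimal sketch, input path assumed:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[2]").appName("images").getOrCreate()

    // replaces ImageSchema.readImages(imagePath, ...)
    val df = spark.read.format("image").load("data/mllib/images") // assumed path
    df.select("image.origin", "image.width", "image.height").show()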
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:227: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:303: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:338: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:919: object LogisticRegressionWithSGD in package classification is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn]     val model = LogisticRegressionWithSGD.train(points, 2)
[warn]                 ^
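The SGD-based classifier's replacement is the DataFrame API's LogisticRegression (optimized with L-BFGS). A sketch, assuming a DataFrame with "label" and "features" columns:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.sql.DataFrame

    def fit(training: DataFrame) =
      new LogisticRegression()
        .setMaxIter(10)     // assumed hyperparameters
        .setRegParam(0.01)
        .fit(training)      // replaces LogisticRegressionWithSGD.train(points, 2)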
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala:369: method train in object KMeans is deprecated: Use train method without 'runs'
[warn]       val model = KMeans.train(points, 2, 2, 1, initMode)
[warn]                          ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:80: value precision in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.precision) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:81: value recall in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.recall) < delta)
[warn]                                                ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:82: value fMeasure in class MulticlassMetrics is deprecated: Use accuracy.
[warn]     assert(math.abs(metrics.accuracy - metrics.fMeasure) < delta)
[warn]                                                ^
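
In the multiclass setting the argument-less precision, recall and fMeasure all equal the accuracy, which is why the assertions above hold and why `accuracy` is the one surviving spelling:

    val acc = metrics.accuracy          // replaces metrics.precision / recall / fMeasure
    val p1  = metrics.precision(1.0)    // per-label variants are not deprecated
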
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:58: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:102: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val ls = new LassoWithSGD()
[warn]              ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:156: object LassoWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = LassoWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:49: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(true)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:75: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:106: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn]                  ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:163: object LinearRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val model = LinearRegressionWithSGD.train(points, 2)
[warn]                 ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:63: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn]     val linearReg = new LinearRegressionWithSGD()
[warn]                     ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:71: constructor RidgeRegressionWithSGD in class RidgeRegressionWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val ridgeReg = new RidgeRegressionWithSGD()
[warn]                    ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:113: object RidgeRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn]     val model = RidgeRegressionWithSGD.train(points, 2)
[warn]                 ^
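
All three SGD regressors funnel into the same replacement; a sketch following the warnings' own recipe, with `training` an assumed "label"/"features" DataFrame. The messages' caveat matters: the SGD classes default regParam to 0.01, LinearRegression to 0.0.

    import org.apache.spark.ml.regression.LinearRegression

    val lasso = new LinearRegression().setElasticNetParam(1.0).setRegParam(0.01) // LassoWithSGD
    val ridge = new LinearRegression().setElasticNetParam(0.0).setRegParam(0.01) // RidgeRegressionWithSGD
    val ols   = new LinearRegression().setSolver("l-bfgs")                       // LinearRegressionWithSGD / LBFGS
    val model = lasso.fit(training)
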
[warn] 45 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.6-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] Total time: 783 s, completed Feb 19, 2020 1:03:53 AM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
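
The replacement named above, AccumulatorV2, ships with concrete subclasses; a minimal sketch with the built-in LongAccumulator, assuming a live SparkContext `sc`:

    val acc = sc.longAccumulator("records")           // registers a LongAccumulator
    sc.parallelize(1 to 100).foreach(_ => acc.add(1))
    println(acc.value)                                // 100
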
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
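
Every warning in this block points at the same target, the kafka-0-10 module; a sketch of its direct-stream API, assuming a StreamingContext `ssc` and placeholder broker/topic/group names:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010._

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "example")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("topic"), kafkaParams))
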
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 95 files
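
The three "Strategy ... was applied" lines are sbt-assembly's merge report. A generic build.sbt sketch of the kind of setting that produces them (illustrative only, not Spark's actual build definition):

    import sbtassembly.AssemblyPlugin.autoImport._

    assemblyMergeStrategy in assembly := {
      case "reference.conf"              => MergeStrategy.filterDistinctLines
      case PathList("META-INF", xs @ _*) => MergeStrategy.discard
      case _                             => MergeStrategy.first
    }
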
[info] SHA-1: c456bd20175ed7d25c436ae2b911d2efacda4d00
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 36 s, completed Feb 19, 2020 1:04:29 AM
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 88 files
[info] SHA-1: 4a09a03ca4698790156734a41df76211c455c026
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 43 s, completed Feb 19, 2020 1:05:12 AM
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
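
The KinesisUtils warnings name KinesisInputDStream.builder; a sketch of that builder as of the 2.4 line, with `ssc` and every stream/region/app value an assumed placeholder:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("myStream")
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new KinesisInitialPositions.Latest)
      .checkpointAppName("myApp")
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()
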
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to 2 files
[warn] Strategy 'filterDistinctLines' was applied to 8 files
[warn] Strategy 'first' was applied to 50 files
[info] SHA-1: 29391387b6e8190318e41349d54e09105e3321a1
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 50 s, completed Feb 19, 2020 1:06:02 AM

========================================================================
Detecting binary incompatibilities with MiMa
========================================================================
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Strategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.MainClassOptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
Error instrumenting class:org.apache.spark.mapred.SparkHadoopMapRedUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Hello
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoHelperChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SignalUtils.ActionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuerySpecificationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DateAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ImplicitTypeCasts
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.ZooKeeperLeaderElectionAgent.LeadershipStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnsetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AggregationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.Listener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestWorkerState
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.Accessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeSorterSpillMerger.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColumnReferenceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
Error instrumenting class:org.apache.spark.input.StreamInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.CaseWhenCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.BlockFetchStarter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator16$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.$$typecreator1$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractWindowExpressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.PrefixCache
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.CubeType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGroupingAnalytics
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
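
The call the banner points to is a one-liner at the session level, e.g.:

    sc.setLogLevel("ERROR")   // Scala/Java; SparkR uses setLogLevel("ERROR")
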
Error instrumenting class:org.apache.spark.deploy.SparkSubmit$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableObjectArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.Division
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.TrackerState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ParenthesizedExpressionContext
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableProviderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveBroadcastHints
Error instrumenting class:org.apache.spark.sql.execution.command.DDLUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBytesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.expressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFunctionContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.CommitLog
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LocationSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.LBFGS.CostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyKeyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.COMMITTED
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.ClientPool
Error instrumenting class:org.apache.spark.scheduler.SplitInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkBuildInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SmallIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseMatrixPickler
Error instrumenting class:org.apache.spark.api.python.DoubleArrayWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelReader
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesStore
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.AnalysisErrorAt
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Once
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RepairTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowConstructorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery.PartitionedRelation
Error instrumenting class:org.apache.spark.deploy.SparkHadoopUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.AssociationRules.Rule
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedGroupConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummaryAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.CheckpointWriter.CheckpointWriteHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowDefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.EndpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNewInstance
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DateConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Aggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.IteratorForPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetDatabasePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetQuantifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CtesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans.DslLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Evolving
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.ReceiverTrackerEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.IdentityProjection
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.$SortedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.OffsetSeqLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WidenSetOperationTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.DateTimeOperations
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.output
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.$ClientCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorResponse
Error instrumenting class:org.apache.spark.launcher.InProcessLauncher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Count
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.FloatConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.BinaryPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.InputFileBlockHolder.FileBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FailNativeCommandContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableSchemaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Timer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Cholesky
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis
Error instrumenting class:org.apache.spark.deploy.master.ui.MasterWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableAttemptInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.RollupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableValuedFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Min
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStoreProvider$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.EmptyDirectoryWriteTask
20/02/19 01:07:09 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
20/02/19 01:07:09 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.linalg.distributed.RowMatrix.$SVDMode$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingDeduplicationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableLongArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumericLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionValContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.TextBasedFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowDatabasesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.TransportFrameDecoder.Interceptor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.FixedLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Bucketizer.BucketizerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.LabeledPointPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.LevelDBLogger
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerSlave
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter.ElementConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.errors.TreeNodeException
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreType
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.State
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.CTESubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TypeConstructorContext
Error instrumenting class:org.apache.spark.SSLOptions
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Append
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.ArrayDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableAliasContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ImplicitOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelReader
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator2$1
Error instrumenting class:org.apache.spark.input.WholeTextFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveMissingReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnquotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.PrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNaturalAndUsingJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
Error instrumenting class:org.apache.spark.deploy.history.HistoryServer
Error instrumenting class:org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
Error instrumenting class:org.apache.spark.api.python.TestOutputKeyConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.NNLS.Workspace
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.api.python.TestWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.RawStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SendHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
Error instrumenting class:org.apache.spark.deploy.FaultToleranceTest$delayedInit$body
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMax
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.LeftSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.Optimizer.OptimizeSubqueries
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.StatFunctions.CovarianceCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Unstable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RecoverPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.Hasher
Error instrumenting class:org.apache.spark.input.StreamBasedRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskReaper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.STATE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LocalIndexEncoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.SharedReadWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Replaced
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.StringType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterInStandby
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.DatabaseDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.ChainedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator18$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DereferenceContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.MultiLineCSVDataSource$
Error instrumenting class:org.apache.spark.deploy.security.HBaseDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntegerLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.LookupFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BooleanAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Complete
Error instrumenting class:org.apache.spark.input.StreamFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnalyzeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.TimestampConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowFunctionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DoubleAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StructContext
[WARN] Unable to detect inner functions for class:org.apache.spark.MapOutputTrackerMaster.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateDatabaseContext
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicatedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.OneWayRpcCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteDirContext
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase$NullIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SortItemContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubquery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Numeric$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.ReceiverInputDStream.ReceiverRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Key
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.util.BytecodeUtils.MethodInvocationFinder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator30$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.BooleanEquality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ByteConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.$DigestCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.IntHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.ABORTED
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ByteType.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitioningUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.trees.TreeNodeRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.MutableProjection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.ByteArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.SummarizerBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UncacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslSymbol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Max
20/02/19 01:07:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingJoinStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ShuffleMetricsSource
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.InMemoryScans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.FixNullability
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierAlternativeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.FunctionArgumentConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StructAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Aggregation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus
Error instrumenting class:org.apache.spark.ml.source.libsvm.LibSVMFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.AttributeSeq
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.UPDATING
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggAliasInGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.QuantileDiscretizer.QuantileDiscretizerWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes
Error instrumenting class:org.apache.spark.deploy.rest.RestSubmissionServer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDeBase.BasePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.SpecialLimits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2n
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.PartitionOverwriteMode
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowVectorAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedWindowContext
Error instrumenting class:org.apache.spark.sql.execution.command.CommandUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Update
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.SpillableArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter.$KeyValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.StringArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StorageHandlerContext
Error instrumenting class:org.apache.spark.input.Configurable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DataType.JSortedObject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsFractional
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Expression
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$ChunkCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FrameBoundContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StatefulAggregationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableIdentifierContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.RatingPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetOperationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.MessageDecoder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStore.MaintenanceTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeColNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinTypeContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildSide
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildRight
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExplainContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.FlattenStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
Error instrumenting class:org.apache.spark.ui.JettyUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.$KVSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOrdinalInOrderByAndGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Binary$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineWriter
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.StopAppClient
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.LongAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.QueryExecution.debug
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GroupingSetContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ShortAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryNoWithContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionPath$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.$Index
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslExpression
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrayAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Subscript
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyRDDWithPartitions
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
Error instrumenting class:org.apache.spark.sql.catalyst.parser.ParserUtils$EnhancedLogicalPlan$
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.ShuffleSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.HashComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
Error instrumenting class:org.apache.spark.deploy.worker.ui.WorkerWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.GeneratorState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseMatrixPickler
Error instrumenting class:org.apache.spark.metrics.MetricsSystem
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.xml.UDFXPathUtil.ReusableStringReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StringLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.ImplicitAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Utils.Lock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$RegisterDriverCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.TASK_END_REASON_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.FlatMapGroupsWithStateStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CompleteRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.ShippableVertexPartition.ShippableVertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator22$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.LimitMarker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByPercentileContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.JsonFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubqueryColumnAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BinaryType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.StoreVersion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockSort
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDA.LDAReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Log
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.RemoveAllHints
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSinkLog
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalNotContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.ClassInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.RandomVertexCut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.IntWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.OneForOneStreamManager.StreamState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupByType
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.LongWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingRelationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.ParserUtils.EnhancedLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierListContext
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueStore
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableIntArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SortPrefixUtils.NoOpPrefixComparator
Error instrumenting class:org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CheckForWorkerTimeOut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$MethodAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetLongDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.OutputCommitCoordinatorEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationListener
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
Error instrumenting class:org.apache.spark.metrics.sink.MetricsServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.SPARK_LISTENER_EVENT_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Message
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NumNonZeros
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.NullIntolerant
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.$ManualCloseBufferedOutputStream$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.IntAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.IntegerType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.QuasiNewton
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StringAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStore$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleMethodContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BinaryAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecLocationContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamMetadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.RightSide
Error instrumenting class:org.apache.spark.ui.ServerInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.Strings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.WindowsSubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.DiskMapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.GradientBoostedTreesModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.sql.execution.datasources.NoopCache$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.CholeskySolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.JoinSelection
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BucketSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DoubleLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.SerializationDebugger
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionCallContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugQuery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$Location
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.IntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableByteArray
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.FloatAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LateralViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryView
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedPrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ShortType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ClearCacheContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.StructTypePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoder.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.1
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter$
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition1D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.HistoryServerDiskManager.Lease
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Deprecated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRelations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropDatabaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition2D
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TinyIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.$SortState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ColumnarBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetBinaryDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.BooleanConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.CountMinSketch.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelWriter
Error instrumenting class:org.apache.spark.sql.catalyst.util.CompressionCodecs$
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskRunner
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.RLEIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableHeaderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.io.ReadAheadInputStream.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.DoublePrefixComparator
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LoadDataContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.QuotedStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Auto
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeNNZ
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DateType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhenClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ChangeColumnContext
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.$OptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator25$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ManageResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationManagerSource
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.Metadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalFieldContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryBodyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatSerdeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$SetAccumulator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinCriteriaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator19$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.1
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSourceLog
[WARN] Unable to detect inner functions for class:org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.LongType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.AutoBatchedPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.ui.ServerInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSink$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMean
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.JOB_RESULT_FORMATTED_CLASS_NAMES
Error instrumenting class:org.apache.spark.ui.WebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.KVStoreScalaSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimaryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticUnaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckSuccess
Error instrumenting class:org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexer.CategoryStats
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.ValuesReaderIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBucketContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SearchedCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.RevokedLeadership
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColPositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.lib.SVDPlusPlus.Conf
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.$RetryingBlockFetchListener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByRowsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.$CloseAndFlushShieldOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.CatalystDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NonReservedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.ElectedLeader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExistsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.CommandBuilderUtils.JavaVendor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator2$1
Error instrumenting class:org.apache.spark.executor.ExecutorSource
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$
[WARN] Unable to detect inner functions for class:org.apache.spark.TestUtils.JavaSourceFromString
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NullLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TruncateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.DoubleAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StarContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.LongAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Nominal$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.ReaderIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.Builder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.RowPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayes.$$typecreator9$1
Error instrumenting class:org.apache.spark.input.StreamRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.RowComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Decoder
Error instrumenting class:org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.BasicOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.PushDownOperatorsToDataSource.FilterAndProject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.AbstractLauncher.ArgumentValidator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SimpleCaseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowFrame
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NestedConstantListContext
Error instrumenting class:org.apache.spark.api.python.JavaToWritableConverter
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionDirectory$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryOrganizationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkDirCleanup
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.$SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator.$StreamBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.WriterThread
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$Heartbeater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ManagedBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NNLSSolver
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.NettyUtils.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableLikeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ByteArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.TableDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.FileInputDStream.FileInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SkewSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanDefaultContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Projection
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.IntAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherBackend.BackendConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetIntDictionaryAwareDecimalConverter
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$EdgeData
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BooleanType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.JdbcRDD.ConnectionFactory
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.KVComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Stable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowOrder
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Solver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTableColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildLeft
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Wildcard
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionBase.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WindowFrameCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorNullsLast
Error instrumenting class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex$
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutput
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$ServerConnection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Node
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslString
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore
Error instrumenting class:org.apache.spark.api.python.TestOutputValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.RadixSortSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator.AliasedGenerator$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IndexToString.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGenerate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec.StateStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.ExternalAppendOnlyUnsafeRowArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.StringToAttributeConversionHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.PullOutNondeterministic
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FirstContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableLocationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ByteAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetStorageStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStoreOps
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.StringAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.EdgeData$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubscriptContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.api.r.SQLUtils.RegexContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.PythonWorkerFactory.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.ManualCloseOutputStream
Error instrumenting class:org.apache.spark.streaming.StreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeWeightSum
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DecimalAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data$
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog$
Error instrumenting class:org.apache.spark.streaming.api.java.JavaStreamingContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.DecryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Metric
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.NullOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.MutableStateArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.PipedRDD.NotEqualsFileNameFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.DumpByteCode
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTablePartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Variance
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaRDD.KafkaRDDIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTempViewUsingContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator30$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.CSVFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$FieldAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Power
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.DummySerializerInstance.$1
Error instrumenting class:org.apache.spark.input.ConfigurableCombineFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Summary
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.JobScheduler.JobHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator38$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.HandleNullInputsForUDF
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Message.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.VertexPartition.VertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$DownloadCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.TypeAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.ExecuteWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMetric
Error instrumenting class:org.apache.spark.sql.execution.streaming.SinkFileStatus$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StackCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.MonitorThread
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.GlobalAggregates
Error instrumenting class:org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.SearchState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.input
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ShuffleMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteJobDescription
Error instrumenting class:org.apache.spark.status.api.v1.ApiRootResource$
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleRead
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowFrameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault1Context
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.TimestampType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggregateFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault2Context
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReregisterWithMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.TimestampAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.ReceiverSupervisor.ReceiverState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoParams
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Stop
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.BloomFilter.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumberContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.MODE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.KeyWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LongConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterViewQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFuncNameContext
Error instrumenting class:org.apache.spark.SparkContext$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NormalEquation
Error instrumenting class:org.apache.spark.sql.execution.datasources.CodecStreams$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLContext.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleStatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockBuilder
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigestSerializer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.HashMapGrowthStrategy.Doubling
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.impl.RandomForest.NodeIndexInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleFunctionIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.MetadataMismatchException
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$MapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.QuasiNewtonSolver.NormalEquationCostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Mean
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTablesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.LongHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestMasterState
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport
Error instrumenting class:org.apache.spark.ui.SparkUI
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate.RichAttribute
Error instrumenting class:org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetAppId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeL1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUpCast
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolvePivot
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.InMemoryBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.FrequentItems.FreqItemCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.StringPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GenericFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableSerDeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.MyByteArrayOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowRefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTblPropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ShortConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IfCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetStringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.Event
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMin
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableApplicationInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsConflicted
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager.$ReferenceWithCleanup
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.AppExecId
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IntConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Unresolved$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigDecimalLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma
Error instrumenting class:org.apache.spark.input.WholeTextFileRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlockBuilder
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.StringToColumn
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatDelimitedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetState
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.RandomForestModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.FloatAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueType
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator15$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimitiveDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Fixed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.text.TextFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCreateTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator9$1
Created : .generated-mima-class-excludes in current directory.
Created : .generated-mima-member-excludes in current directory.
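The wall of "[WARN] Unable to detect inner functions" and "Error instrumenting class" lines above is emitted by Spark's GenerateMIMAIgnore tool (run by dev/mima from the tools module) while it builds the two exclude files just created: it reflectively walks every class on the classpath and warns whenever Scala reflection cannot resolve a class's inner members, which is common for synthetic classes such as the $typecreator anonymous classes listed above. A minimal sketch of that kind of reflective probe, with warn-and-continue error handling assumed to match the tool's behavior (illustrative code, not the tool's actual implementation):

    import scala.reflect.runtime.{universe => ru}

    // Hypothetical sketch: resolve a class and list its inner members.
    // Synthetic classes (closures, $typecreator classes, specialized code)
    // often make the mirror throw; the real tool logs a warning and skips.
    def innerMembers(className: String): Set[String] = {
      val mirror = ru.runtimeMirror(getClass.getClassLoader)
      try {
        mirror.staticClass(className).typeSignature.members
          .collect { case m if m.isClass => m.fullName }
          .toSet
      } catch {
        case _: Throwable => Set.empty // warn and continue
      }
    }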
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] spark-parent: previous-artifact not set, not analyzing binary compatibility
[info] spark-tags: previous-artifact not set, not analyzing binary compatibility
[info] Done updating.
[info] spark-tools: previous-artifact not set, not analyzing binary compatibility
[info] spark-kvstore: previous-artifact not set, not analyzing binary compatibility
[info] spark-unsafe: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-flume-sink: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-common: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-shuffle: previous-artifact not set, not analyzing binary compatibility
[info] spark-network-yarn: previous-artifact not set, not analyzing binary compatibility
[info] spark-sketch: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sketch_2.11:2.3.0  (filtered 1)
[info] spark-launcher: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-launcher_2.11:2.3.0  (filtered 1)
[info] spark-mllib-local: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib-local_2.11:2.3.0  (filtered 1)
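The MiMa lines above show the pattern: modules with a configured previous artifact are diffed against the released 2.3.0 jars, the rest skip the check, and "(filtered N)" counts issues suppressed by the exclude lists generated earlier. Spark wires this up through its own MimaBuild/MimaExcludes helpers, but the underlying sbt-mima-plugin setting looks roughly like this (a sketch assuming the plugin's newer mimaPreviousArtifacts key; the "previous-artifact not set" wording in this log comes from an older plugin release):

    // build.sbt sketch (sbt-mima-plugin keys are auto-imported by the plugin):
    mimaPreviousArtifacts := Set("org.apache.spark" %% "spark-sketch" % "2.3.0")
    // then run: sbt mimaReportBinaryIssues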
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn]     param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] 
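The AccumulableParam deprecation above points at the Spark 2.x replacement: subclass AccumulatorV2 and register it with the SparkContext. A minimal sketch of a Double-summing accumulator on that API (the class name here is illustrative; the overridden methods are the real AccumulatorV2 contract):

    import org.apache.spark.util.AccumulatorV2

    // AccumulatorV2 replacement for a Double-summing AccumulableParam:
    // IN = Double (values added on executors), OUT = Double (the result).
    class DoubleSumAccumulator extends AccumulatorV2[Double, Double] {
      private var sum = 0.0
      override def isZero: Boolean = sum == 0.0
      override def copy(): AccumulatorV2[Double, Double] = {
        val acc = new DoubleSumAccumulator
        acc.sum = this.sum
        acc
      }
      override def reset(): Unit = sum = 0.0
      override def add(v: Double): Unit = sum += v
      override def merge(other: AccumulatorV2[Double, Double]): Unit =
        sum += other.value
      override def value: Double = sum
    }

    // usage: val acc = new DoubleSumAccumulator; sc.register(acc, "mySum")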
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn]   override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn]   def attemptNumber(): Int = attemptId
[warn] 
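The StageInfo deprecation is a straight rename: listener code should read attemptNumber() instead of attemptId. For instance, in a SparkListener (sketch; the listener class name is illustrative):

    import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}

    class StageAttemptLogger extends SparkListener {
      override def onStageCompleted(event: SparkListenerStageCompleted): Unit = {
        val info = event.stageInfo
        // attemptNumber() replaces the deprecated attemptId
        println(s"stage ${info.stageId} finished attempt ${info.attemptNumber()}")
      }
    }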
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn]     if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] 
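The RBackend warning reflects Netty 4.1 deprecating the direct accessors on ServerBootstrap in favor of the config() view; a sketch of the equivalent null check through ServerBootstrapConfig:

    import io.netty.bootstrap.ServerBootstrap

    // Netty 4.1: ServerBootstrap.childGroup() is deprecated; the same
    // EventLoopGroup is reachable through the ServerBootstrapConfig view.
    def hasLiveChildGroup(bootstrap: ServerBootstrap): Boolean =
      bootstrap != null && bootstrap.config().childGroup() != null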
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-ganglia-lgpl: previous-artifact not set, not analyzing binary compatibility
[info] spark-kubernetes: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-yarn: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
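
The Mesos setRole/getRole warnings above come from the deprecated single-role protobuf fields. A heavily hedged sketch of the multi-role replacement, assuming Mesos protos 1.2+ where FrameworkInfo gained a repeated roles field guarded by the MULTI_ROLE capability (resource-side role lookups move to reservation info):

    import org.apache.mesos.Protos.FrameworkInfo

    // was: fwInfoBuilder.setRole(role)
    fwInfoBuilder
      .addRoles(role)
      .addCapabilities(FrameworkInfo.Capability.newBuilder()
        .setType(FrameworkInfo.Capability.Type.MULTI_ROLE))
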
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-mesos: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-catalyst: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming_2.11:2.3.0  (filtered 3)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
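
Each consumer.poll(...) warning above refers to the long-based KafkaConsumer.poll, which can block indefinitely on metadata. A sketch of the replacement overload, assuming kafka-clients 2.0+ where poll(java.time.Duration) exists:

    import java.time.Duration

    // was: consumer.poll(timeout)
    val records = consumer.poll(Duration.ofMillis(timeout))
    // Caveat: poll(Duration.ZERO) returns immediately and, unlike poll(0L), does not
    // block waiting for partition assignment, so code calling poll(0) purely for its
    // assignment side effect needs a nonzero timeout or a different strategy.
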
[info] spark-streaming-kafka-0-10-assembly: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
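
All of the kafka-0-8 warnings above carry the same advice: move to the Kafka 0.10 integration. A minimal sketch of the 0.10 direct-stream API (topic, broker address, and group id are placeholders; ssc is an in-scope StreamingContext):

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group")
    // Direct stream bound to consistent executor placement.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("topicA"), kafkaParams))
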
[info] spark-streaming-flume: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-8: previous-artifact not set, not analyzing binary compatibility
[info] spark-graphx: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-graphx_2.11:2.3.0  (filtered 3)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
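
The KinesisUtils.createStream warnings above point at the builder replacement, and the AmazonKinesisClient/setEndpoint warnings likewise reflect the AWS SDK's move to builders. A sketch of the suggested KinesisInputDStream.builder path (stream, endpoint, region, and app names are placeholders; ssc is an in-scope StreamingContext):

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    val stream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("myStream")
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new KinesisInitialPositions.Latest())
      .checkpointAppName("myKinesisApp")
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()
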
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-streaming-kinesis-asl: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-10: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.0  (filtered 6)
[info] spark-streaming-flume-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kinesis-asl-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-streaming-kafka-0-8-assembly: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
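
The ParquetFileReader.readFooter warnings above track parquet-mr's shift to the InputFile API. A sketch of the replacement, assuming parquet-mr 1.10+ where ParquetFileReader.open and HadoopInputFile are available:

    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // was: ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
    val reader = ParquetFileReader.open(HadoopInputFile.fromPath(filePath, sharedConf))
    val fileMetaData =
      try reader.getFooter.getFileMetaData
      finally reader.close()
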
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
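
The ProcessingTime deprecation above names its replacement directly. A sketch with Trigger.ProcessingTime (df is an in-scope streaming DataFrame):

    import org.apache.spark.sql.streaming.Trigger

    // was: ProcessingTime("10 seconds") from org.apache.spark.sql.streaming
    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()
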
[info] spark-avro: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[info] spark-sql-kafka-0-10: previous-artifact not set, not analyzing binary compatibility
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
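
The OneHotEncoder warnings above describe the rename planned for 3.0.0; on the 2.4 line the forward-compatible class is OneHotEncoderEstimator. A sketch (column names and df are placeholders):

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    // was: new OneHotEncoder().setInputCol("categoryIndex").setOutputCol("categoryVec")
    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(df).transform(df)  // estimator: fit, then transform
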
[info] spark-hive: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] spark-repl: previous-artifact not set, not analyzing binary compatibility
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] spark-hive-thriftserver: previous-artifact not set, not analyzing binary compatibility
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/spark-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-assembly: previous-artifact not set, not analyzing binary compatibility
[info] spark-core: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-core_2.11:2.3.0  (filtered 902)
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
[info] spark-sql: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:2.3.0  (filtered 289)
[info] spark-mllib: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:2.3.0  (filtered 513)
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/spark-examples_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[info] spark-examples: previous-artifact not set, not analyzing binary compatibility
[success] Total time: 52 s, completed Feb 19, 2020 1:08:22 AM
[info] Building Spark assembly (w/Hive 1.2.1) using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos assembly/package
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.6-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 21 s, completed Feb 19, 2020 1:08:56 AM

========================================================================
Running Java style checks
========================================================================
Checkstyle checks passed.

========================================================================
Running Spark unit tests
========================================================================
[info] Running Spark tests using SBT with these arguments:  -Phadoop-2.6 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test
Using /usr/java/jdk1.8.0_191 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] ScalaTest
[info] ScalaTest
[info] ScalaTest
[info] Run completed in 59 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 111 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 112 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 21 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Test run started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testKill started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testLauncher started
[info] Test org.apache.spark.launcher.InProcessLauncherSuite.testErrorPropagation started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.112s
[info] UTF8StringPropertyCheckSuite:
[info] - toString (169 milliseconds)
[info] - numChars (8 milliseconds)
[info] - startsWith (14 milliseconds)
[info] - endsWith (7 milliseconds)
[info] - toUpperCase (6 milliseconds)
[info] - toLowerCase (3 milliseconds)
[info] - compare (7 milliseconds)
[info] - substring (42 milliseconds)
[info] - contains (26 milliseconds)
[info] - trim, trimLeft, trimRight (82 milliseconds)
[info] - reverse (4 milliseconds)
[info] - indexOf (15 milliseconds)
[info] - repeat (81 milliseconds)
[info] - lpad, rpad (4 milliseconds)
[info] - concat (41 milliseconds)
[info] - concatWs (37 milliseconds)
[info] - split !!! IGNORED !!!
[info] - levenshteinDistance (8 milliseconds)
[info] - hashCode (2 milliseconds)
[info] - equals (2 milliseconds)
[info] Test run started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.addTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.equalsTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromYearMonthStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.toStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.subtractTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromCaseInsensitiveStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromSingleUnitStringTest started
[info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromDayTimeStringTest started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 0.011s
[info] Test run started
[info] Test org.apache.spark.unsafe.array.LongArraySuite.basicTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOnHeapMemoryBlockResetsBaseObjectAndOffset started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.overlappingCopyMemory started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.memoryDebugFillEnabledInTest started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.offHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.heapMemoryReuse started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorPoolingReUsesLongArrays started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started
[info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOffHeapMemoryBlockResetsOffset started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.044s
[info] Test run started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownLongInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownIntegerInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTest started
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithStart started
[info] SparkSinkSuite:
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithStart started
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithStart started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testMissingArg started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testAllOptions started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescending started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testEqualSeparatedOption started
[info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testExtraOptions started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithStart started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.242s
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescending started
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithLast started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliParser started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.testRefWithIntNaturalKey started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescending started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndex started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndex started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.246s
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkLauncher started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testAlternateSyntaxParsing started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunner started
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testDuplicateIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testEmptyIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIndexAnnotation started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNumEncoding started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexMethod started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testKeyClashes started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex2 started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexName started
[info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testSparkRShell started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.011s
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testMissingAppResource started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testShellCliParser started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testClusterCmdBuilder started
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithStart started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testDriverCmdBuilder started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownBytesInputs started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestPaddedStrings started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliKillAndStatus started
[info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestBytes started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoArg started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkFallback started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerWithMasterNoMainClass started
[info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliHelpAndNoArg started
[info] Test run finished: 0 failed, 0 ignored, 15 total, 1.061s
[info] Test run started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testNoRedirectToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithOutputRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectOutputToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectsSimple started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorToLog started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithLogRedirection started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testFailedChildProc started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorTwiceFails started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testBadLogRedirect started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectLastWins started
[info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectToLog started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.132s
[info] Test run started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testValidOptionStrings started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testJavaMajorVersion started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testPythonArgQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testWindowsBatchQuoting started
[info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testInvalidOptionStrings started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testTimeout started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testStreamFiltering started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.343s
[info] Test org.apache.spark.launcher.LauncherServerSuite.testSparkSubmitVmShutsDown started
[info] Test run started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testLauncherServerReuse started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.titleCase started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.soundex started
[info] Test org.apache.spark.launcher.LauncherServerSuite.testAppHandleDisconnect started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.basicTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamUnderflow started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToShort started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.startsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.compareTo started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.levenshteinDistance started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamOverflow started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamIntArray started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.upperAndLower started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToInt started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.createBlankString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.prefix started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatWsTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.repeat started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.contains started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.skipWrongFirstByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.emptyStringTest started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamSlice started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimBothWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substringSQL started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring_index started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.pad started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.split started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trims started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimRightWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.findInSet started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.translate started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.reverse started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimLeftWithTrimString started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.endsWith started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToByte started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToLong started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStream started
[info] Test org.apache.spark.unsafe.types.UTF8StringSuite.indexOf started
[info] Test run finished: 0 failed, 0 ignored, 38 total, 0.043s
[info] Test org.apache.spark.launcher.LauncherServerSuite.testCommunication started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.14s
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.6-SNAPSHOT.jar ...
[info] - Success with ack (2 seconds, 397 milliseconds)
[info] - Failure with nack (1 second, 67 milliseconds)
[info] - Failure with timeout (1 second, 77 milliseconds)
[info] Done packaging.
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadChallenge started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] BitArraySuite:
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongAppId started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithStart started
[info] - error case when create BitArray (17 milliseconds)
[info] - bitSize (3 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithStart started
[info] - set (1 millisecond)
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongNonce started
[info] - normal operation (8 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescending started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testMismatchedSecret started
[info] - merge (7 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithStart started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessage started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithLast started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessageWhenTransferringZeroBytes started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescending started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithLast started
[info] BloomFilterSuite:
[info] - accuracy - Byte (10 milliseconds)
[info] - mergeInPlace - Byte (4 milliseconds)
[info] - accuracy - Short (5 milliseconds)
[info] - mergeInPlace - Short (8 milliseconds)
[info] - accuracy - Int (265 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       fwInfoBuilder.setRole(role)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]     role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]             Option(r.getRole), reservation)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       role.foreach { r => builder.setRole(r) }
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn]         (RoleResourceInfo(resource.getRole, reservation),
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithStart started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testAuthEngine started
[info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadKeySize started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithStart started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 2.377s
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithLast started
[info] Test run started
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.streamStatesAreFreedWhenConnectionIsClosedEvenIfBufferIteratorThrowsException started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val dstream = FlumeUtils.createPollingStream(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createPollingStream(ssc, host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn]     val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)
[warn] 
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     val msgs = c.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(timeout)
[warn] 
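The poll(0) calls flagged above use the overload that the Kafka client deprecated in favor of poll(java.time.Duration) (KIP-266). Below is a minimal sketch of the newer overload, assuming a Kafka 2.x client on the classpath; the broker address, group id, and topic name are placeholders:

    import java.time.Duration
    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import scala.collection.JavaConverters._

    object PollDurationSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "localhost:9092") // placeholder broker
        props.put("group.id", "sketch-group")            // placeholder group
        props.put("key.deserializer",
          "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("value.deserializer",
          "org.apache.kafka.common.serialization.StringDeserializer")

        val consumer = new KafkaConsumer[String, String](props)
        consumer.subscribe(Collections.singletonList("sketch-topic"))
        // poll(Duration) bounds the total time spent, including metadata
        // fetches, whereas the deprecated poll(long) could block
        // indefinitely waiting for metadata.
        val records = consumer.poll(Duration.ofMillis(100))
        for (r <- records.asScala) println(s"${r.key} -> ${r.value}")
        consumer.close()
      }
    }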
[info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.managedBuffersAreFeedWhenConnectionIsClosed started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithSkip started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.085s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescending started
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.testRefWithIntNaturalKey started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescending started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges: JList[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       leaders: JMap[TopicAndPartition, Broker],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       KafkaUtils.getFromOffsets(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   protected val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]       val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]            context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]   ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val offsetRanges: Array[OffsetRange],
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]     val kc = new KafkaCluster(kafkaParams)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn]         OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn] 
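For the kafka-0-8 deprecations above, the warning text names the migration target itself: the spark-streaming-kafka-0-10 module. A spark-shell style sketch of its direct stream, assuming ssc is an existing StreamingContext; broker, group, and topic names are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",          // placeholder
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "sketch-group",                     // placeholder
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Direct stream: no receivers; Spark tracks the Kafka offsets itself.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,              // assumed existing StreamingContext
      PreferConsistent, // spread partitions evenly across executors
      Subscribe[String, String](Seq("sketch-topic"), kafkaParams)
    )
    stream.map(record => (record.key, record.value)).print()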
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithMax started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndex started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithSkip started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndex started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]   private val client = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]   client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getRecordsRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn]     getShardIteratorRequest.setRequestCredentials(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]       KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn]         KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(credentials)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     kinesisClient.setEndpoint(endpoint)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn]       .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn]     val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     client.setEndpoint(endpointUrl)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn]     val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn]     dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn]     worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn] 
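Two of the Kinesis deprecations above state their replacements: KinesisInputDStream.builder for KinesisUtils.createStream, and on the AWS SDK side the client builder for the AmazonKinesisClient constructor plus setEndpoint. A spark-shell style sketch of both, assuming an existing StreamingContext ssc and a streaming Duration batchInterval; the stream, region, and app names are placeholders:

    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.kinesis.KinesisInitialPositions
    import org.apache.spark.streaming.kinesis.KinesisInputDStream

    // Builder replacement for `new AmazonKinesisClient(creds)` + setEndpoint.
    val client = AmazonKinesisClientBuilder.standard()
      .withEndpointConfiguration(
        new EndpointConfiguration("https://kinesis.us-west-2.amazonaws.com", "us-west-2"))
      .build()

    // Builder replacement for the deprecated KinesisUtils.createStream.
    val kinesisStream = KinesisInputDStream.builder
      .streamingContext(ssc)                 // assumed StreamingContext
      .streamName("sketch-stream")           // placeholder
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new KinesisInitialPositions.Latest())
      .checkpointAppName("sketch-app")       // placeholder
      .checkpointInterval(batchInterval)     // assumed streaming Duration
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()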
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithLast started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithStart started
[info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndex started
[info] - mergeInPlace - Int (3 seconds, 31 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 38 total, 9.585s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.ArrayWrappersSuite.testGenericArrayKey started
[info] ScalaTest
[info] Run completed in 30 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBBenchmark ignored
[info] Test run finished: 0 failed, 1 ignored, 0 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testBasicIteration started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMultipleObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMetadata started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testArrayIndices started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testRemoveAll started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.03s
[info] Test run started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleTypesWriteReadDelete started
[info] ScalaTest
[info] Run completed in 22 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - accuracy - Long (602 milliseconds)
[info] - mergeInPlace - Long (121 milliseconds)
[info] ScalaTest
[info] ScalaTest
[info] Run completed in 643 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 50 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testObjectWriteReadDelete started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testSkip started
[info] - Multiple consumers (8 seconds, 344 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleObjectWriteReadDelete started
[info] - Multiple consumers with some failures (1 second, 514 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testReopenAndVersionCheckDb started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMetadata started
[info] Test run started
[info] - accuracy - String (6 seconds, 66 milliseconds)
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testUpdate started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testRemoveAll started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchUnregisteredExecutor started
[info] Test org.apache.spark.util.kvstore.LevelDBSuite.testNegativeIndexValues started
[info] Test run finished: 0 failed, 0 ignored, 9 total, 8.553s
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongExecutor started
[info] TestingUtilsSuite:
[info] - Comparing doubles using relative error. (27 milliseconds)
[info] - mergeInPlace - String (4 seconds, 248 milliseconds)
[info] - Comparing doubles using absolute error. (3 milliseconds)
[info] - incompatible merge (1 millisecond)
[info] CountMinSketchSuite:
[info] - Comparing vectors using relative error. (19 milliseconds)
[info] - Comparing vectors using absolute error. (5 milliseconds)
[info] - accuracy - Byte (574 milliseconds)
[info] - mergeInPlace - Byte (236 milliseconds)
[info] - Comparing Matrices using absolute error. (808 milliseconds)
[info] - Comparing Matrices using relative error. (7 milliseconds)
[info] UtilsSuite:
[info] - EPSILON (4 milliseconds)
[info] MatricesSuite:
[info] - dense matrix construction (1 millisecond)
[info] - dense matrix construction with wrong dimension (2 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNoServer started
[info] - accuracy - Short (1 second, 269 milliseconds)
[info] - sparse matrix construction (1 second, 220 milliseconds)
[info] - sparse matrix construction with wrong number of elements (1 millisecond)
[info] - index in matrices incorrect input (3 milliseconds)
[info] - equals (16 milliseconds)
[info] - matrix copies are deep copies (1 millisecond)
[info] - matrix indexing and updating (2 milliseconds)
[info] - dense to dense (21 milliseconds)
[info] - dense to sparse (4 milliseconds)
[info] - sparse to sparse (10 milliseconds)
[info] - sparse to dense (5 milliseconds)
[info] - compressed dense (7 milliseconds)
[info] - compressed sparse (3 milliseconds)
[info] - map, update (4 milliseconds)
[info] - transpose (2 milliseconds)
[info] - foreachActive (5 milliseconds)
[info] - horzcat, vertcat, eye, speye (21 milliseconds)
[info] - zeros (2 milliseconds)
[info] - ones (4 milliseconds)
[info] - eye (2 milliseconds)
[info] - mergeInPlace - Short (286 milliseconds)
[info] - rand (388 milliseconds)
[info] - randn (2 milliseconds)
[info] - diag (0 milliseconds)
[info] - sprand (10 milliseconds)
[info] - sprandn (3 milliseconds)
[info] - toString (111 milliseconds)
[info] - numNonzeros and numActives (2 milliseconds)
[info] - fromBreeze with sparse matrix (26 milliseconds)
Feb 19, 2020 1:09:52 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
Feb 19, 2020 1:09:52 AM com.github.fommil.netlib.BLAS <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
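These WARNING lines mean netlib-java found no native BLAS on the worker and fell back to its pure-Java F2J backend, which is slower but functionally equivalent. A quick way to confirm which backend was actually resolved (the same check works for LAPACK, whose warnings appear a few lines below):

    // Prints e.g. com.github.fommil.netlib.F2jBLAS when no native library loads.
    println(com.github.fommil.netlib.BLAS.getInstance().getClass.getName)
    println(com.github.fommil.netlib.LAPACK.getInstance().getClass.getName)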
[info] - row/col iterator (569 milliseconds)
[info] BreezeMatrixConversionSuite:
[info] - dense matrix to breeze (1 millisecond)
[info] - dense breeze matrix to matrix (5 milliseconds)
[info] - sparse matrix to breeze (2 milliseconds)
[info] - sparse breeze matrix to sparse matrix (5 milliseconds)
[info] MultivariateGaussianSuite:
[info] - accuracy - Int (1 second, 669 milliseconds)
Feb 19, 2020 1:09:53 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
Feb 19, 2020 1:09:53 AM com.github.fommil.netlib.LAPACK <clinit>
WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
[info] - univariate (730 milliseconds)
[info] - multivariate (13 milliseconds)
[info] - multivariate degenerate (1 millisecond)
[info] - SPARK-11302 (8 milliseconds)
[info] BreezeVectorConversionSuite:
[info] - dense to breeze (1 millisecond)
[info] - mergeInPlace - Int (317 milliseconds)
[info] - sparse to breeze (184 milliseconds)
[info] - dense breeze to vector (1 millisecond)
[info] - sparse breeze to vector (2 milliseconds)
[info] - sparse breeze with partially-used arrays to vector (1 millisecond)
[info] VectorsSuite:
[info] - dense vector construction with varargs (1 millisecond)
[info] - dense vector construction from a double array (0 milliseconds)
[info] - sparse vector construction (1 millisecond)
[info] - sparse vector construction with unordered elements (3 milliseconds)
[info] - sparse vector construction with mismatched indices/values array (2 milliseconds)
[info] - sparse vector construction with too many indices vs size (2 milliseconds)
[info] - sparse vector construction with negative indices (1 millisecond)
[info] - dense to array (1 millisecond)
[info] - dense argmax (1 millisecond)
[info] - sparse to array (1 millisecond)
[info] - sparse argmax (1 millisecond)
[info] - vector equals (4 milliseconds)
[info] - vectors equals with explicit 0 (3 milliseconds)
[info] - indexing dense vectors (0 milliseconds)
[info] - indexing sparse vectors (1 millisecond)
[info] - zeros (1 millisecond)
[info] - Vector.copy (1 millisecond)
[info] - fromBreeze (4 milliseconds)
[info] - sqdist (66 milliseconds)
[info] - foreachActive (23 milliseconds)
[info] - vector p-norm (20 milliseconds)
[info] - Vector numActive and numNonzeros (4 milliseconds)
[info] - Vector toSparse and toDense (3 milliseconds)
[info] - Vector.compressed (1 millisecond)
[info] - SparseVector.slice (2 milliseconds)
[info] - sparse vector only support non-negative length (2 milliseconds)
[info] BLASSuite:
[info] - copy (7 milliseconds)
[info] - scal (3 milliseconds)
[info] - axpy (3 milliseconds)
[info] - dot (16 milliseconds)
[info] - spr (17 milliseconds)
[info] - syr (6 milliseconds)
[info] - gemm (10 milliseconds)
[info] - gemv (7 milliseconds)
[info] - spmv (2 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testRegisterInvalidExecutor started
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutCleanlyClosesClient started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchThreeSort started
[info] ScalaTest
[info] Run completed in 27 seconds, 508 milliseconds.
[info] Total number of tests run: 19
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 19, failed 0, canceled 0, ignored 1, pending 0
[info] All tests passed.
[info] Passed: Total 81, Failed 0, Errors 0, Passed 81, Ignored 1
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongBlockId started
[info] ScalaTest
[info] Run completed in 27 seconds, 764 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 44, Failed 0, Errors 0, Passed 44
[info] - accuracy - Long (763 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNonexistent started
[info] - mergeInPlace - Long (220 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchOneSort started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 10.506s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testRetryAndUnrecoverable started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnFirst started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testUnrecoverableFailure started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnSecond started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testThreeIOExceptions started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testNoFailures started
[info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testTwoIOExceptions started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 0.4s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadSecret started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]         new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]         ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]           ParquetFileReader.readFooter(
[warn] 
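Among the Parquet deprecations above, readFooter has a commonly used successor in newer parquet-mr: open the file and ask the reader for its footer. A hedged sketch, assuming parquet-mr 1.10+ and an existing Hadoop Configuration conf; the file path is a placeholder:

    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // Replacement pattern for ParquetFileReader.readFooter(conf, path, ...):
    val reader = ParquetFileReader.open(
      HadoopInputFile.fromPath(new Path("/tmp/example.parquet"), conf))
    try {
      val fileMetaData = reader.getFooter.getFileMetaData
      println(fileMetaData.getSchema) // the Parquet message schema
    } finally {
      reader.close()
    }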
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] 
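Here the warning spells out the fix: Trigger.ProcessingTime replaces the deprecated ProcessingTime class. A spark-shell style sketch against the structured streaming API, assuming df is an existing streaming DataFrame:

    import org.apache.spark.sql.streaming.Trigger

    // Trigger.ProcessingTime accepts a duration string or an interval in ms.
    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()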
[info] - accuracy - String (1 second, 812 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]         consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]       consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]           consumer.poll(0)
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn] 
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadAppId started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testValid started
[info] JdbcRDDSuite:
[info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testEncryption started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn] 
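The warning names the forward-compatible class, OneHotEncoderEstimator, which takes over the OneHotEncoder name in 3.0.0. A sketch assuming df is a DataFrame containing a numeric index column; the column names are placeholders:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex")) // placeholder input column
      .setOutputCols(Array("categoryVec"))  // placeholder output column
    val encoded = encoder.fit(df).transform(df)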
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test run finished: 0 failed, 0 ignored, 4 total, 3.56s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testGoodClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslServer started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testBadClient started
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       if (addedClasspath != "") {
[warn] 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]         settings.classpath append addedClasspath
[warn] 
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.203s
[info] Test run started
[info] Test org.apache.spark.network.sasl.ShuffleSecretManagerSuite.testMultipleRegisters started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithShuffleFiles started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithoutShuffleFiles started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.216s
[info] - mergeInPlace - String (3 seconds, 531 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testSaslAppIsolation started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testAuthEngineAppIsolation started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.187s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testNormalizeAndInternPathname started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testSortShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testBadRequests started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.jsonSerializationOfExecutorRegistration started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.197s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testOpenShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testRegisterExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testBadMessages started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.022s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testEmptyBlockFetch started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailure started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailureAndSuccess started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchThree started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchOne started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.006s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.BlockTransferMessagesSuite.serializeOpenShuffleBlocks started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupOnlyRemovedApp started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupUsesExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.noCleanupAndCleanup started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupMultipleExecutors started
[warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.6-SNAPSHOT.jar ...
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.731s
[info] Done packaging.
[info] ScalaTest
[info] Run completed in 21 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - accuracy - Byte array (6 seconds, 838 milliseconds)
[info] - basic functionality (7 seconds, 282 milliseconds)
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutInactiveRequests started
[info] DistributedSuite:
[info] - large id overflow (701 milliseconds)
[info] - mergeInPlace - Byte array (5 seconds, 307 milliseconds)
[info] - incompatible merge (2 milliseconds)
[info] SparkUncaughtExceptionHandlerSuite:
[info] FlumePollingStreamSuite:
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = true (4 seconds, 687 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 3 total, 43.332s
[info] Test run started
[info] Test org.apache.spark.network.ProtocolSuite.responses started
[info] Test org.apache.spark.network.ProtocolSuite.requests started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.042s
[info] Test run started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariable started
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = false (5 seconds, 486 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariableConcurrent started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeFactoryBeforeCreateClient started
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = true (7 seconds, 90 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeBlockClientsWithFactory started
[info] Test org.apache.spark.network.TransportClientFactorySuite.neverReturnInactiveClients started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeIdleConnectionForRequestTimeOut started
[info] Test org.apache.spark.network.TransportClientFactorySuite.returnDifferentClientsForDifferentServers started
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@6960da1b rejected from java.util.concurrent.ThreadPoolExecutor@275edf64[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - task throws not serializable exception (21 seconds, 273 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 7 total, 11.508s
[info] - local-cluster format (6 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslClientFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslServerFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthReplay started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testNewAuth started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testLargeMessageEncryption started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthFailure started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.613s
[info] Test run started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamConcurrently started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendOneWayMessage started
[info] Test org.apache.spark.network.RpcIntegrationSuite.singleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.throwErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleTrouble started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.returnErrorRPC started
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = false (3 seconds, 767 milliseconds)
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamFailures started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamOneAtATime started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendSuccessAndFailure started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 1.486s
[info] Test run started
[info] Test org.apache.spark.network.crypto.TransportCipherSuite.testBufferNotLeaksOnInternalError started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.028s
[info] Test run started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnException started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnClose started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.testActiveStreams started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.clearAllOutstandingRequests started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.031s
[info] Test run started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testEmptyFrame started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testNegativeFrameSize started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testSplitLengthField started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testFrameDecoding started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testInterception started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testRetainedFrames started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.103s
[info] Test run started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testDeallocateReleasesManagedBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testByteBufBody started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testShortWrite started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodySingleBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodyMultipleBuffers started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testSingleWrite started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.005s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testNonMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslAuthentication started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessageChunking started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testServerAlwaysEncrypt started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDataEncryptionIsActuallyEnabled started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testFileRegionEncryption started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslEncryption started
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = true (2 seconds, 696 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDelegates started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessage started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testRpcHandlerDelegate started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 2.211s
[info] Test run started
[info] Test org.apache.spark.network.util.CryptoUtilsSuite.testConfConversion started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.TransportRequestHandlerSuite.handleFetchRequestAndStreamRequest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.016s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testServerResponse started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testClientChallenge started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testGeneralNettyMemoryMetrics started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testAdditionalMetrics started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.106s
[info] Test run started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchNonExistentChunk started
[info] - flume polling test (17 seconds, 619 milliseconds)
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchFileChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBothChunks started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchChunkAndNonExistent started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBufferChunk started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 4.402s
[info] Test run started
[info] Test org.apache.spark.network.StreamSuite.testSingleStream started
[info] Test org.apache.spark.network.StreamSuite.testMultipleStreams started
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = false (5 seconds, 934 milliseconds)
[info] Test org.apache.spark.network.StreamSuite.testConcurrentStreams started
[info] Test org.apache.spark.network.StreamSuite.testZeroLengthStream started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 2.636s
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = true (2 seconds, 790 milliseconds)
[info] - simple groupByKey (13 seconds, 476 milliseconds)
[info] ReliableKafkaStreamSuite:
[info] - flume polling test multiple hosts (14 seconds, 853 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = false (8 seconds, 785 milliseconds)
[info] PipedRDDSuite:
[info] FlumeStreamSuite:
[info] - basic pipe (2 seconds, 182 milliseconds)
[info] - basic pipe with tokenization (1 second, 164 milliseconds)
[info] - failure in iterating over pipe input (758 milliseconds)
[info] - stdin writer thread should exit when task is finished (849 milliseconds)
[info] - advanced pipe (3 seconds, 691 milliseconds)
[info] - pipe with empty partition (102 milliseconds)
[info] - pipe with env variable (42 milliseconds)
[info] - pipe with process which cannot be launched due to bad command (50 milliseconds)
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
[info] - pipe with process which is launched but fails with non-zero exit status (62 milliseconds)
[info] - basic pipe with separate working directory (150 milliseconds)
[info] - test pipe exports map_input_file (222 milliseconds)
[info] - flume input stream (10 seconds, 762 milliseconds)
[info] - test pipe exports mapreduce_map_input_file (61 milliseconds)
[info] AccumulatorV2Suite:
[info] - LongAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - DoubleAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - ListAccumulator (1 millisecond)
[info] - LegacyAccumulatorWrapper (3 milliseconds)
[info] - LegacyAccumulatorWrapper with AccumulatorParam that has no equals/hashCode (3 milliseconds)
[info] FileSuite:
[info] - text files (1 second, 795 milliseconds)
[info] - groupByKey where map output sizes exceed maxMbInFlight (21 seconds, 523 milliseconds)
[info] - flume input compressed stream (2 seconds, 786 milliseconds)
[info] - text files (compressed) (855 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumeStreamSuite.testFlumeStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.313s
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumePollingStreamSuite.testFlumeStream started
[info] - SequenceFiles (542 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.24s
[info] - SequenceFile (compressed) (1 second, 669 milliseconds)
[info] - SequenceFile with writable key (525 milliseconds)
[info] - Reliable Kafka input stream with single topic (6 seconds, 625 milliseconds)
[info] - SequenceFile with writable value (925 milliseconds)
[info] LabelPropagationSuite:
[info] - SequenceFile with writable key and value (1 second, 881 milliseconds)
[info] - implicit conversions in reading SequenceFiles (816 milliseconds)
[info] - object files of ints (305 milliseconds)
[info] - accumulators (7 seconds, 727 milliseconds)
[info] - object files of complex types (1 second, 696 milliseconds)
[info] - object files of classes from a JAR (2 seconds, 449 milliseconds)
[info] - write SequenceFile using new Hadoop API (1 second, 60 milliseconds)
[info] - read SequenceFile using new Hadoop API (749 milliseconds)
[info] - binary file input as byte array (220 milliseconds)
[info] - portabledatastream caching tests (222 milliseconds)
[info] - Reliable Kafka input stream with multiple topics (7 seconds, 709 milliseconds)
[info] - portabledatastream persist disk storage (1 second, 48 milliseconds)
[info] - portabledatastream flatmap tests (255 milliseconds)
[info] KafkaStreamSuite:
[info] - SPARK-22357 test binaryFiles minPartitions (1 second, 623 milliseconds)
[info] - minimum split size per node and per rack should be less than or equal to maxSplitSize (201 milliseconds)
[info] - fixed record length binary file as byte array (128 milliseconds)
[info] - negative binary record length should raise an exception (130 milliseconds)
[info] - file caching (177 milliseconds)
[info] - prevent user from overwriting the empty directory (old Hadoop API) (283 milliseconds)
[info] - prevent user from overwriting the non-empty directory (old Hadoop API) (402 milliseconds)
[info] - allow user to disable the output directory existence checking (old Hadoop API) (1 second, 171 milliseconds)
[info] - prevent user from overwriting the empty directory (new Hadoop API) (118 milliseconds)
[info] - prevent user from overwriting the non-empty directory (new Hadoop API) (337 milliseconds)
[info] - allow user to disable the output directory existence checking (new Hadoop API) (330 milliseconds)
[info] - Kafka input stream (3 seconds, 161 milliseconds)
[info] - save Hadoop Dataset through old Hadoop API (257 milliseconds)
[info] - save Hadoop Dataset through new Hadoop API (195 milliseconds)
[info] - Get input files via old Hadoop API (353 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - broadcast variables (13 seconds, 784 milliseconds)
[info] - Get input files via new Hadoop API (1 second, 154 milliseconds)
[info] - spark.files.ignoreCorruptFiles should work for both HadoopRDD and NewHadoopRDD (1 second, 81 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits works correctly (old Hadoop API) (1 second, 499 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits works correctly (new Hadoop API) (663 milliseconds)
[info] - spark.files.ignoreMissingFiles should work for both HadoopRDD and NewHadoopRDD (920 milliseconds)
[info] LogPageSuite:
[info] - get logs simple (365 milliseconds)
[info] PartiallyUnrolledIteratorSuite:
[info] - join two iterators (37 milliseconds)
[info] HistoryServerDiskManagerSuite:
[info] - leasing space (190 milliseconds)
[info] - tracking active stores (25 milliseconds)
[info] - approximate size heuristic (1 millisecond)
[info] ExternalShuffleServiceSuite:
[info] - groupByKey without compression (669 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (3 seconds, 896 milliseconds)
[info] - repeatedly failing task (8 seconds, 746 milliseconds)
[info] - receiving from largest starting offset (845 milliseconds)
[info] - creating stream by offset (928 milliseconds)
[info] - Label Propagation (32 seconds, 519 milliseconds)
[info] BytecodeUtilsSuite:
[info] - closure invokes a method (28 milliseconds)
[info] - closure inside a closure invokes a method (3 milliseconds)
[info] - closure inside a closure inside a closure invokes a method (4 milliseconds)
[info] - closure calling a function that invokes a method (2 milliseconds)
[info] - closure calling a function that invokes a method which uses another closure (3 milliseconds)
[info] - nested closure (3 milliseconds)
[info] PregelSuite:
[info] - 1 iteration (2 seconds, 653 milliseconds)
[info] - shuffle non-zero block size (15 seconds, 260 milliseconds)
[info] - chain propagation (8 seconds, 179 milliseconds)
[info] PeriodicGraphCheckpointerSuite:
[info] - Persisting (916 milliseconds)
[info] - offset recovery (18 seconds, 224 milliseconds)
[info] - Direct Kafka stream report input information (1 second, 179 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (266 milliseconds)
[info] - maxMessagesPerPartition with no lag (508 milliseconds)
[info] - maxMessagesPerPartition respects max rate (547 milliseconds)
[info] - using rate controller (3 seconds, 588 milliseconds)
[info] - Checkpointing (7 seconds, 644 milliseconds)
[info] ConnectedComponentsSuite:
[info] - repeatedly failing task that crashes JVM (27 seconds, 485 milliseconds)
[info] - use backpressure.initialRate with backpressure (1 second, 922 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (867 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to one (494 milliseconds)
[info] - shuffle serializer (14 seconds, 421 milliseconds)
[info] KafkaRDDSuite:
[info] - basic usage (779 milliseconds)
[info] - Grid Connected Components (5 seconds, 980 milliseconds)
[info] - iterator boundary conditions (1 second, 17 milliseconds)
[info] KafkaClusterSuite:
[info] - metadata apis (101 milliseconds)
[info] - leader offset apis (9 milliseconds)
[info] - consumer offset apis (403 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 7.611s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaRDDSuite.testKafkaRDD started
[info] - Reverse Grid Connected Components (13 seconds, 161 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 3.489s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - zero sized blocks (22 seconds, 459 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 5.814s
[info] MesosSchedulerUtilsSuite:
[info] - use at least the minimum overhead (1 second, 78 milliseconds)
[info] - use overhead if it is greater than the minimum value (4 milliseconds)
[info] - use spark.mesos.executor.memoryOverhead (if set) (21 milliseconds)
[info] - parse a non-empty constraint string correctly (26 milliseconds)
[info] - parse an empty constraint string correctly (2 milliseconds)
[info] - Chain Connected Components (13 seconds, 346 milliseconds)
[info] - throw an exception when the input is malformed (6 milliseconds)
[info] - empty values for attributes' constraints match all values (106 milliseconds)
[info] - subset match is performed for set attributes (9 milliseconds)
[info] - less than equal match is performed on scalar attributes (9 milliseconds)
[info] - contains match is performed for range attributes (49 milliseconds)
[info] - equality match is performed for text attributes (2 milliseconds)
[info] - Port reservation is done correctly with user specified ports only (36 milliseconds)
[info] - Port reservation is done correctly with all random ports (6 milliseconds)
[info] - Port reservation is done correctly with user specified ports only - multiple ranges (4 milliseconds)
[info] - Port reservation is done correctly with all random ports - multiple ranges (4 milliseconds)
[info] - Principal specified via spark.mesos.principal (637 milliseconds)
[info] - Principal specified via spark.mesos.principal.file (143 milliseconds)
[info] - Principal specified via spark.mesos.principal.file that does not exist (5 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL (3 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE (3 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE that does not exist (4 milliseconds)
[info] - Secret specified via spark.mesos.secret (3 milliseconds)
[info] - Secret specified via spark.mesos.secret.file (37 milliseconds)
[info] - Secret specified via spark.mesos.secret.file that does not exist (4 milliseconds)
[info] - Secret specified via SPARK_MESOS_SECRET (2 milliseconds)
[info] - Secret specified via SPARK_MESOS_SECRET_FILE (3 milliseconds)
[info] - Secret specified with no principal (3 milliseconds)
[info] - Principal specification preference (2 milliseconds)
[info] - Secret specification preference (3 milliseconds)
[info] MesosSchedulerBackendUtilSuite:
[info] - ContainerInfo fails to parse invalid docker parameters (774 milliseconds)
[info] - ContainerInfo parses docker parameters (2 milliseconds)
[info] - SPARK-28778 ContainerInfo respects Docker network configuration (28 milliseconds)
[info] MesosFineGrainedSchedulerBackendSuite:
[info] - weburi is set in created scheduler driver (139 milliseconds)
[info] - Use configured mesosExecutor.cores for ExecutorInfo (105 milliseconds)
[info] - check spark-class location correctly (18 milliseconds)
[info] - spark docker properties correctly populate the DockerInfo message (26 milliseconds)
[info] - mesos resource offers result in launching tasks (86 milliseconds)
[info] - can handle multiple roles (9 milliseconds)
[info] MesosCoarseGrainedSchedulerBackendSuite:
[info] - repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925) (35 seconds, 246 milliseconds)
[info] - mesos supports killing and limiting executors (6 seconds, 667 milliseconds)
[info] - mesos supports killing and relaunching tasks with executors (2 seconds, 255 milliseconds)
[info] - mesos supports spark.executor.cores (134 milliseconds)
[info] - mesos supports unset spark.executor.cores (2 seconds, 337 milliseconds)
[info] - mesos does not acquire more than spark.cores.max (198 milliseconds)
[info] - mesos does not acquire gpus if not specified (133 milliseconds)
[info] - mesos does not acquire more than spark.mesos.gpus.max (865 milliseconds)
[info] - mesos declines offers that violate attribute constraints (171 milliseconds)
[info] - Reverse Chain Connected Components (16 seconds, 475 milliseconds)
[info] - mesos declines offers with a filter when spark.cores.max is reached (608 milliseconds)
[info] - zero sized blocks without kryo (24 seconds, 525 milliseconds)
[info] - mesos declines offers with a filter when maxCores is not a multiple of executor.cores (150 milliseconds)
[info] - mesos declines offers with a filter when spark.cores.max is reached with executor.cores (133 milliseconds)
[info] - Connected Components on a Toy Connected Graph (1 second, 480 milliseconds)
[info] VertexRDDSuite:
[info] - mesos assigns tasks round-robin on offers (96 milliseconds)
[info] - mesos creates multiple executors on a single slave (132 milliseconds)
[info] - mesos doesn't register twice with the same shuffle service (215 milliseconds)
[info] - filter (562 milliseconds)
[info] - mapValues (1 second, 257 milliseconds)
[info] - minus (542 milliseconds)
[info] - Port offer decline when there is no appropriate range (1 second, 283 milliseconds)
[info] - minus with RDD[(VertexId, VD)] (386 milliseconds)
[info] - Port offer accepted when ephemeral ports are used (1 second, 94 milliseconds)
[info] - minus with non-equal number of partitions (1 second, 549 milliseconds)
[info] - caching (encryption = off) (19 seconds, 100 milliseconds)
[info] - Port offer accepted with user defined port numbers (1 second, 345 milliseconds)
[info] - mesos kills an executor when told (82 milliseconds)
[info] - diff (964 milliseconds)
[info] - weburi is set in created scheduler driver (85 milliseconds)
[info] - failover timeout is set in created scheduler driver (93 milliseconds)
[info] - honors unset spark.mesos.containerizer (399 milliseconds)
[info] - honors spark.mesos.containerizer="mesos" (117 milliseconds)
[info] - diff with RDD[(VertexId, VD)] (895 milliseconds)
[info] - docker settings are reflected in created tasks (188 milliseconds)
[info] - force-pull-image option is disabled by default (265 milliseconds)
[info] - mesos supports spark.executor.uri (107 milliseconds)
[info] - diff vertices with non-equal number of partitions (828 milliseconds)
[info] - mesos supports setting fetcher cache (91 milliseconds)
[info] - leftJoin (1 second, 239 milliseconds)
[info] - mesos supports disabling fetcher cache (1 second, 269 milliseconds)
[info] - mesos sets task name to spark.app.name (686 milliseconds)
[info] - mesos sets configurable labels on tasks (107 milliseconds)
[info] - mesos supports spark.mesos.network.name and spark.mesos.network.labels (74 milliseconds)
[info] - leftJoin vertices with non-equal number of partitions (1 second, 367 milliseconds)
[info] - innerJoin (1 second, 627 milliseconds)
[info] - innerJoin vertices with non-equal number of partitions (1 second, 283 milliseconds)
[info] - SPARK-28778 '--hostname' shouldn't be set for executor when virtual network is enabled (3 seconds, 184 milliseconds)
[info] - supports spark.scheduler.minRegisteredResourcesRatio (634 milliseconds)
[info] - aggregateUsingIndex (932 milliseconds)
[info] - mergeFunc (400 milliseconds)
[info] - cache, getStorageLevel (194 milliseconds)
[info] - checkpoint (1 second, 538 milliseconds)
[info] - shuffle on mutable pairs (15 seconds, 937 milliseconds)
[info] - count (1 second, 210 milliseconds)
[info] EdgePartitionSuite:
[info] - reverse (7 milliseconds)
[info] - map (2 milliseconds)
[info] - filter (4 milliseconds)
[info] - groupEdges (2 milliseconds)
[info] - innerJoin (2 milliseconds)
[info] - isActive, numActives, replaceActives (1 millisecond)
[info] - tripletIterator (1 millisecond)
[info] - serialization (19 milliseconds)
[info] EdgeSuite:
[info] - compare (1 millisecond)
[info] PageRankSuite:
[info] - caching (encryption = on) (14 seconds, 546 milliseconds)
[info] - supports data locality with dynamic allocation (7 seconds, 540 milliseconds)
[info] - Creates env-based reference secrets. (569 milliseconds)
[info] - Creates env-based value secrets. (550 milliseconds)
[info] - Star PageRank (7 seconds, 489 milliseconds)
[info] - Creates file-based reference secrets. (546 milliseconds)
[info] - Creates file-based value secrets. (928 milliseconds)
[info] MesosClusterSchedulerSuite:
[info] - can queue drivers (49 milliseconds)
[info] - can kill queued drivers (38 milliseconds)
[info] - sorting on mutable pairs (9 seconds, 753 milliseconds)
[info] - can handle multiple roles (58 milliseconds)
[info] - escapes commandline args for the shell (94 milliseconds)
[info] - supports spark.mesos.driverEnv.* (28 milliseconds)
[info] - supports spark.mesos.network.name and spark.mesos.network.labels (45 milliseconds)
[info] - supports setting fetcher cache on the dispatcher (122 milliseconds)
[info] - supports setting fetcher cache in the submission (167 milliseconds)
[info] - supports disabling fetcher cache (116 milliseconds)
[info] - accept/decline offers with driver constraints (158 milliseconds)
[info] - supports spark.mesos.driver.labels (56 milliseconds)
[info] - can kill supervised drivers (68 milliseconds)
[info] - SPARK-27347: do not restart outdated supervised drivers (1 second, 561 milliseconds)
[info] - Declines offer with refuse seconds = 120. (31 milliseconds)
[info] - Creates env-based reference secrets. (40 milliseconds)
[info] - Creates env-based value secrets. (39 milliseconds)
[info] - Creates file-based reference secrets. (40 milliseconds)
[info] - Creates file-based value secrets. (42 milliseconds)
[info] MesosClusterDispatcherSuite:
[info] - prints usage on empty input (51 milliseconds)
[info] - prints usage with only --help (3 milliseconds)
[info] - prints error with unrecognized options (3 milliseconds)
[info] MesosClusterManagerSuite:
[info] - mesos fine-grained (88 milliseconds)
[info] - mesos coarse-grained (656 milliseconds)
[info] - mesos with zookeeper (159 milliseconds)
[info] - mesos with i/o encryption throws error (601 milliseconds)
[info] MesosClusterDispatcherArgumentsSuite:
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.mesos.key2,value2)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test if spark config args are passed successfully (11 milliseconds)
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test non-conf settings (2 milliseconds)
[info] MesosProtoUtilsSuite:
[info] - mesosLabels (1 millisecond)
[info] - Star PersonalPageRank (7 seconds, 257 milliseconds)
[info] - caching on disk (encryption = off) (13 seconds, 97 milliseconds)
[info] KafkaRDDSuite:
[info] - cogroup using mutable pairs (9 seconds, 83 milliseconds)
[info] - subtract mutable pairs (8 seconds, 991 milliseconds)
[info] - caching on disk (encryption = on) (12 seconds, 537 milliseconds)
[info] - caching in memory, replicated (encryption = off) (11 seconds, 156 milliseconds)
[info] - sort with Java non serializable class - Kryo (13 seconds, 118 milliseconds)
[info] - Grid PageRank (25 seconds, 898 milliseconds)
[info] - Chain PageRank (7 seconds, 939 milliseconds)
[info] - sort with Java non serializable class - Java (11 seconds, 733 milliseconds)
[info] - shuffle with different compression settings (SPARK-3426) (1 second, 985 milliseconds)
[info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (752 milliseconds)
[info] - cannot find its local shuffle file if the stage was not executed and the shuffle is rerun (123 milliseconds)
[info] - caching in memory, replicated (encryption = off) (with replication as stream) (15 seconds, 533 milliseconds)
[info] - metrics for shuffle without aggregation (461 milliseconds)
[info] - metrics for shuffle with aggregation (2 seconds, 180 milliseconds)
[info] - Chain PersonalizedPageRank (11 seconds, 140 milliseconds)
[info] - multiple simultaneous attempts for one task (SPARK-8029) (794 milliseconds)
[info] - caching in memory, replicated (encryption = on) (15 seconds, 415 milliseconds)
[info] - using external shuffle service (15 seconds, 856 milliseconds)
[info] ConfigEntrySuite:
[info] - conf entry: int (1 millisecond)
[info] - conf entry: long (1 millisecond)
[info] - conf entry: double (1 millisecond)
[info] - conf entry: boolean (1 millisecond)
[info] - conf entry: optional (0 milliseconds)
[info] - conf entry: fallback (1 millisecond)
[info] - conf entry: time (1 millisecond)
[info] - conf entry: bytes (1 millisecond)
[info] - conf entry: regex (2 milliseconds)
[info] - conf entry: string seq (2 milliseconds)
[info] - conf entry: int seq (1 millisecond)
[info] - conf entry: transformation (1 millisecond)
[info] - conf entry: checkValue() (2 milliseconds)
[info] - conf entry: valid values check (2 milliseconds)
[info] - conf entry: conversion error (2 milliseconds)
[info] - default value handling is null-safe (1 millisecond)
[info] - variable expansion of spark config entries (570 milliseconds)
[info] - conf entry: default function (4 milliseconds)
[info] - conf entry: alternative keys (2 milliseconds)
[info] - onCreate (2 milliseconds)
[info] InputOutputMetricsSuite:
[info] - input metrics for old hadoop with coalesce (377 milliseconds)
[info] - input metrics with cache and coalesce (518 milliseconds)
[info] - input metrics for new Hadoop API with coalesce (130 milliseconds)
[info] - input metrics when reading text file (65 milliseconds)
[info] - input metrics on records read - simple (58 milliseconds)
[info] - input metrics on records read - more stages (191 milliseconds)
[info] - input metrics on records - New Hadoop API (56 milliseconds)
[info] - input metrics on records read with cache (175 milliseconds)
[info] - input read/write and shuffle read/write metrics all line up (1 second, 189 milliseconds)
[info] - input metrics with interleaved reads (519 milliseconds)
[info] - output metrics on records written (124 milliseconds)
[info] - output metrics on records written - new Hadoop API (165 milliseconds)
[info] - output metrics when writing text file (118 milliseconds)
[info] - input metrics with old CombineFileInputFormat (56 milliseconds)
[info] - input metrics with new CombineFileInputFormat (91 milliseconds)
[info] - input metrics with old Hadoop API in different thread (119 milliseconds)
[info] - input metrics with new Hadoop API in different thread (116 milliseconds)
[info] AppStatusStoreSuite:
[info] - quantile calculation: 1 task (67 milliseconds)
[info] - quantile calculation: few tasks (10 milliseconds)
[info] - quantile calculation: more tasks (39 milliseconds)
[info] - quantile calculation: lots of tasks (120 milliseconds)
[info] - quantile calculation: custom quantiles (50 milliseconds)
[info] - quantile cache (146 milliseconds)
[info] - SPARK-28638: only successful tasks have taskSummary with in-memory kvstore (2 milliseconds)
[info] - SPARK-28638: summary should contain successful tasks only with in-memory kvstore (14 milliseconds)
[info] CountEvaluatorSuite:
[info] - test count 0 (3 milliseconds)
[info] - test count >= 1 (50 milliseconds)
[info] TaskResultGetterSuite:
[info] - caching in memory, replicated (encryption = on) (with replication as stream) (12 seconds, 298 milliseconds)
[info] - handling results smaller than max RPC message size (173 milliseconds)
[info] - handling results larger than max RPC message size (394 milliseconds)
[info] - handling total size of results larger than maxResultSize (127 milliseconds)
[info] - task retried if result missing from block manager (680 milliseconds)
[info] - failed task deserialized with the correct classloader (SPARK-11195) (542 milliseconds)
[info] - task result size is set on the driver, not the executors (118 milliseconds)
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:304)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[info] - failed task is handled when error occurs deserializing the reason (73 milliseconds)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:193)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply$mcV$sp(TaskResultGetter.scala:142)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
	at org.apache.spark.scheduler.TaskResultGetter$$anon$4.run(TaskResultGetter.scala:138)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] SerializerPropertiesSuite:
[info] - JavaSerializer does not support relocation (3 milliseconds)
[info] - KryoSerializer supports relocation when auto-reset is enabled (123 milliseconds)
[info] - KryoSerializer does not support relocation when auto-reset is disabled (19 milliseconds)
[info] DriverRunnerTest:
[info] - Process succeeds instantly (93 milliseconds)
[info] - Process failing several times and then succeeding (38 milliseconds)
[info] - Process doesn't restart if not supervised (42 milliseconds)
[info] - Process doesn't restart if killed (39 milliseconds)
[info] - Reset of backoff counter (42 milliseconds)
[info] - Kill process finalized with state KILLED (45 milliseconds)
[info] - Finalized with state FINISHED (41 milliseconds)
[info] - Finalized with state FAILED (41 milliseconds)
[info] - Handle exception starting process (42 milliseconds)
[info] MapOutputTrackerSuite:
[info] - master start and stop (481 milliseconds)
[info] - master register shuffle and fetch (213 milliseconds)
[info] - master register and unregister shuffle (533 milliseconds)
[info] - master register shuffle and unregister map output and fetch (95 milliseconds)
[info] - Loop with source PageRank (27 seconds, 933 milliseconds)
[info] - remote fetch (1 second, 49 milliseconds)
[info] - remote fetch below max RPC message size (175 milliseconds)
[info] - min broadcast size exceeds max RPC message size (38 milliseconds)
[info] - getLocationsWithLargestOutputs with multiple outputs on the same machine (826 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (10 seconds, 968 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (with replication as stream) (10 seconds, 499 milliseconds)
[info] - remote fetch using broadcast (19 seconds, 357 milliseconds)
[info] - equally divide map statistics tasks (79 milliseconds)
[info] - zero-sized blocks should be excluded when getMapSizesByExecutorId (171 milliseconds)
[info] PythonBroadcastSuite:
[info] - PythonBroadcast can be serialized with Kryo (SPARK-4882) (373 milliseconds)
[info] ExecutorRunnerTest:
[info] - command includes appId (37 milliseconds)
[info] CompressionCodecSuite:
[info] - default compression codec (314 milliseconds)
[info] - lz4 compression codec (3 milliseconds)
[info] - lz4 compression codec short form (3 milliseconds)
[info] - lz4 supports concatenation of serialized streams (4 milliseconds)
[info] - lzf compression codec (80 milliseconds)
[info] - lzf compression codec short form (2 milliseconds)
[info] - lzf supports concatenation of serialized streams (2 milliseconds)
[info] - snappy compression codec (67 milliseconds)
[info] - snappy compression codec short form (2 milliseconds)
[info] - snappy supports concatenation of serialized streams (2 milliseconds)
[info] - zstd compression codec (24 milliseconds)
[info] - zstd compression codec short form (2 milliseconds)
[info] - zstd supports concatenation of serialized streams (1 millisecond)
[info] - bad compression codec (2 milliseconds)
[info] MetricsSystemSuite:
[info] - MetricsSystem with default config (3 milliseconds)
[info] - MetricsSystem with sources add (18 milliseconds)
[info] - MetricsSystem with Driver instance (1 millisecond)
[info] - MetricsSystem with Driver instance and spark.app.id is not set (2 milliseconds)
[info] - MetricsSystem with Driver instance and spark.executor.id is not set (17 milliseconds)
[info] - MetricsSystem with Executor instance (1 millisecond)
[info] - MetricsSystem with Executor instance and spark.app.id is not set (1 millisecond)
[info] - MetricsSystem with Executor instance and spark.executor.id is not set (2 milliseconds)
[info] - MetricsSystem with instance which is neither Driver nor Executor (2 milliseconds)
[info] - MetricsSystem with Executor instance, with custom namespace (2 milliseconds)
[info] - MetricsSystem with Executor instance, custom namespace which is not set (2 milliseconds)
[info] - MetricsSystem with Executor instance, custom namespace, spark.executor.id not set (2 milliseconds)
[info] - MetricsSystem with non-driver, non-executor instance with custom namespace (2 milliseconds)
[info] ConfigReaderSuite:
[info] - variable expansion (2 milliseconds)
[info] - circular references (2 milliseconds)
[info] - spark conf provider filters config keys (0 milliseconds)
[info] DoubleRDDSuite:
[info] - sum (47 milliseconds)
[info] - WorksOnEmpty (41 milliseconds)
[info] - WorksWithOutOfRangeWithOneBucket (35 milliseconds)
[info] - WorksInRangeWithOneBucket (41 milliseconds)
[info] - WorksInRangeWithOneBucketExactMatch (36 milliseconds)
[info] - WorksWithOutOfRangeWithTwoBuckets (28 milliseconds)
[info] - WorksWithOutOfRangeWithTwoUnEvenBuckets (15 milliseconds)
[info] - WorksInRangeWithTwoBuckets (34 milliseconds)
[info] - WorksInRangeWithTwoBucketsAndNaN (40 milliseconds)
[info] - WorksInRangeWithTwoUnevenBuckets (44 milliseconds)
[info] - WorksMixedRangeWithTwoUnevenBuckets (15 milliseconds)
[info] - Loop with sink PageRank (25 seconds, 642 milliseconds)
[info] EdgeRDDSuite:
[info] - WorksMixedRangeWithFourUnevenBuckets (15 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaN (49 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange (18 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity (15 milliseconds)
[info] - WorksWithOutOfRangeWithInfiniteBuckets (15 milliseconds)
[info] - ThrowsExceptionOnInvalidBucketArray (2 milliseconds)
[info] - WorksWithoutBucketsBasic (81 milliseconds)
[info] - WorksWithoutBucketsBasicSingleElement (28 milliseconds)
[info] - WorksWithoutBucketsBasicNoRange (26 milliseconds)
[info] - cache, getStorageLevel (252 milliseconds)
[info] - WorksWithoutBucketsBasicTwo (30 milliseconds)
[info] - WorksWithDoubleValuesAtMinMax (57 milliseconds)
[info] - WorksWithoutBucketsWithMoreRequestedThanElements (33 milliseconds)
[info] - WorksWithoutBucketsForLargerDatasets (56 milliseconds)
[info] - WorksWithoutBucketsWithNonIntegralBucketEdges (70 milliseconds)
[info] - checkpointing (470 milliseconds)
[info] - count (696 milliseconds)
[info] GraphSuite:
[info] - caching in memory, serialized, replicated (encryption = on) (10 seconds, 175 milliseconds)
[info] - Graph.fromEdgeTuples (910 milliseconds)
[info] - Graph.fromEdges (142 milliseconds)
[info] - WorksWithHugeRange (2 seconds, 257 milliseconds)
[info] - ThrowsExceptionOnInvalidRDDs (37 milliseconds)
[info] NextIteratorSuite:
[info] - one iteration (5 milliseconds)
[info] - two iterations (2 milliseconds)
[info] - empty iteration (1 millisecond)
[info] - close is called once for empty iterations (1 millisecond)
[info] - close is called once for non-empty iterations (1 millisecond)
[info] SparkSubmitSuite:
[info] - Graph.apply (407 milliseconds)
[info] - prints usage on empty input (31 milliseconds)
[info] - prints usage with only --help (3 milliseconds)
[info] - prints error with unrecognized options (2 milliseconds)
[info] - handle binary specified but not class (99 milliseconds)
[info] - handles arguments with --key=val (3 milliseconds)
[info] - handles arguments to user program (1 millisecond)
[info] - handles arguments to user program with name collision (1 millisecond)
[info] - print the right queue name (16 milliseconds)
[info] - SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled (2 milliseconds)
[info] - specify deploy mode through configuration (265 milliseconds)
[info] - handles YARN cluster mode (36 milliseconds)
[info] - triplets (464 milliseconds)
[info] - handles YARN client mode (84 milliseconds)
[info] - handles standalone cluster mode (27 milliseconds)
[info] - handles legacy standalone cluster mode (23 milliseconds)
[info] - handles standalone client mode (57 milliseconds)
[info] - handles mesos client mode (53 milliseconds)
[info] - handles k8s cluster mode (19 milliseconds)
[info] - handles confs with flag equivalents (21 milliseconds)
[info] - SPARK-21568 ConsoleProgressBar should be enabled only in shells (113 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (with replication as stream) (9 seconds, 49 milliseconds)
[info] - partitionBy (13 seconds, 810 milliseconds)
[info] - launch simple application with spark-submit (13 seconds, 579 milliseconds)
[info] - mapVertices (348 milliseconds)
[info] - mapVertices changing type with same erased type (1 second, 48 milliseconds)
[info] - mapEdges (179 milliseconds)
[info] - mapTriplets (759 milliseconds)
[info] - reverse (567 milliseconds)
[info] - caching on disk, replicated (encryption = off) (9 seconds, 399 milliseconds)
[info] - reverse with join elimination (913 milliseconds)
[info] - subgraph (1 second, 198 milliseconds)
[info] - mask (867 milliseconds)
[info] - groupEdges (918 milliseconds)
[info] - aggregateMessages (526 milliseconds)
[info] - outerJoinVertices (4 seconds, 310 milliseconds)
[info] - launch simple application with spark-submit with redaction (11 seconds, 628 milliseconds)
[info] - more edge partitions than vertex partitions (520 milliseconds)
[info] - checkpoint (741 milliseconds)
[info] - cache, getStorageLevel (173 milliseconds)
[info] - caching on disk, replicated (encryption = off) (with replication as stream) (9 seconds, 670 milliseconds)
[info] - non-default number of edge partitions (547 milliseconds)
[info] - unpersist graph RDD (3 seconds, 679 milliseconds)
[info] - SPARK-14219: pickRandomVertex (458 milliseconds)
[info] ShortestPathsSuite:
[info] - Shortest Path Computations (1 second, 704 milliseconds)
[info] GraphOpsSuite:
[info] - joinVertices (1 second, 230 milliseconds)
[info] - collectNeighborIds (1 second, 6 milliseconds)
[info] - removeSelfEdges (901 milliseconds)
[info] - filter (834 milliseconds)
[info] - convertToCanonicalEdges (371 milliseconds)
[info] - collectEdgesCycleDirectionOut (1 second, 22 milliseconds)
[info] - caching on disk, replicated (encryption = on) (11 seconds, 706 milliseconds)
[info] - collectEdgesCycleDirectionIn (1 second, 421 milliseconds)
[info] - collectEdgesCycleDirectionEither (1 second, 723 milliseconds)
[info] - collectEdgesChainDirectionOut (558 milliseconds)
[info] - collectEdgesChainDirectionIn (838 milliseconds)
[info] - collectEdgesChainDirectionEither (922 milliseconds)
[info] StronglyConnectedComponentsSuite:
[info] - Island Strongly Connected Components (2 seconds, 367 milliseconds)
[info] - includes jars passed in through --jars (23 seconds, 105 milliseconds)
[info] - caching on disk, replicated (encryption = on) (with replication as stream) (10 seconds, 897 milliseconds)
[info] - basic usage (2 minutes, 16 seconds)
[info] - Cycle Strongly Connected Components (4 seconds, 790 milliseconds)
[info] - 2 Cycle Strongly Connected Components (6 seconds, 64 milliseconds)
[info] VertexPartitionSuite:
[info] - isDefined, filter (6 milliseconds)
[info] - map (1 millisecond)
[info] - diff (1 millisecond)
[info] - leftJoin (3 milliseconds)
[info] - innerJoin (3 milliseconds)
[info] - createUsingIndex (1 millisecond)
[info] - innerJoinKeepLeft (1 millisecond)
[info] - aggregateUsingIndex (1 millisecond)
[info] - reindex (1 millisecond)
[info] - serialization (9 milliseconds)
[info] GraphLoaderSuite:
[info] - GraphLoader.edgeListFile (1 second, 13 milliseconds)
[info] TriangleCountSuite:
[info] - Count a single triangle (863 milliseconds)
[info] - Count two triangles (1 second, 383 milliseconds)
[info] - Count two triangles with bi-directed edges (1 second, 401 milliseconds)
[info] - Count a single triangle with duplicate edges (1 second, 732 milliseconds)
[info] GraphGeneratorsSuite:
[info] - GraphGenerators.generateRandomEdges (3 milliseconds)
[info] - GraphGenerators.sampleLogNormal (9 milliseconds)
[info] - GraphGenerators.logNormalGraph (583 milliseconds)
[info] - SPARK-5064 GraphGenerators.rmatGraph numEdges upper bound (648 milliseconds)
[info] SVDPlusPlusSuite:
[info] - caching in memory and disk, replicated (encryption = off) (15 seconds, 393 milliseconds)
[info] - Test SVD++ with mean square error on training set (4 seconds, 525 milliseconds)
[info] - Test SVD++ with no edges (1 second, 67 milliseconds)
[info] FailureTrackerSuite:
[info] - failures expire if validity interval is set (269 milliseconds)
[info] - failures never expire if validity interval is not set (-1) (13 milliseconds)
[info] ClientSuite:
[info] - default Yarn application classpath (61 milliseconds)
[info] - default MR application classpath (2 milliseconds)
[info] - includes jars passed in through --packages (28 seconds, 554 milliseconds)
[info] - resultant classpath for an application that defines a classpath for YARN (323 milliseconds)
[info] - resultant classpath for an application that defines a classpath for MR (19 milliseconds)
[info] - resultant classpath for an application that defines both classpaths, YARN and MR (23 milliseconds)
[info] - Local jar URIs (436 milliseconds)
[info] - Jar path propagation through SparkConf (637 milliseconds)
[info] - Cluster path translation (51 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (with replication as stream) (12 seconds, 491 milliseconds)
[info] - configuration and args propagate through createApplicationSubmissionContext (108 milliseconds)
[info] - spark.yarn.jars with multiple paths and globs (248 milliseconds)
[info] - distribute jars archive (161 milliseconds)
[info] - distribute archive multiple times (3 seconds, 240 milliseconds)
[info] - distribute local spark jars (123 milliseconds)
[info] - ignore same name jars (457 milliseconds)
[info] - files URI match test1 (1 millisecond)
[info] - files URI match test2 (0 milliseconds)
[info] - files URI match test3 (0 milliseconds)
[info] - wasb URI match test (0 milliseconds)
[info] - hdfs URI match test (0 milliseconds)
[info] - files URI mismatch test1 (2 milliseconds)
[info] - files URI mismatch test2 (0 milliseconds)
[info] - files URI mismatch test3 (1 millisecond)
[info] - wasb URI mismatch test1 (1 millisecond)
[info] - wasb URI mismatch test2 (0 milliseconds)
[info] - s3 URI mismatch test (0 milliseconds)
[info] - hdfs URI mismatch test1 (0 milliseconds)
[info] - hdfs URI mismatch test2 (1 millisecond)
[info] YarnAllocatorSuite:
[info] - single container allocated (265 milliseconds)
[info] - container should not be created if requested number is met (73 milliseconds)
[info] - some containers allocated (55 milliseconds)
[info] - receive more containers than requested (48 milliseconds)
[info] - decrease total requested executors (56 milliseconds)
[info] - decrease total requested executors to less than currently running (64 milliseconds)
[info] - kill executors (60 milliseconds)
[info] - kill same executor multiple times (47 milliseconds)
[info] - process same completed container multiple times (51 milliseconds)
[info] - lost executor removed from backend (45 milliseconds)
[info] - blacklisted nodes reflected in amClient requests (70 milliseconds)
[info] - memory exceeded diagnostic regexes (1 millisecond)
[info] - window based failure executor counting (62 milliseconds)
[info] - SPARK-26269: YarnAllocator should have same blacklist behaviour with YARN (93 milliseconds)
[info] ClientDistributedCacheManagerSuite:
[info] - test getFileStatus empty (29 milliseconds)
[info] - test getFileStatus cached (1 millisecond)
[info] - test addResource (4 milliseconds)
[info] - test addResource link null (3 milliseconds)
[info] - test addResource appmaster only (2 milliseconds)
[info] - test addResource archive (2 milliseconds)
[info] ExtensionServiceIntegrationSuite:
[info] - Instantiate (8 milliseconds)
[info] - Contains SimpleExtensionService Service (5 milliseconds)
[info] YarnAllocatorBlacklistTrackerSuite:
[info] - expiring its own blacklisted nodes (2 milliseconds)
[info] - not handling the expiry of scheduler blacklisted nodes (2 milliseconds)
[info] - combining scheduler and allocation blacklist (4 milliseconds)
[info] - blacklist all available nodes (3 milliseconds)
[info] YarnClusterSuite:
[info] - caching in memory and disk, replicated (encryption = on) (10 seconds, 899 milliseconds)
[info] - includes jars passed through spark.jars.packages and spark.jars.repositories (20 seconds, 51 milliseconds)
[info] - correctly builds R packages included in a jar with --packages !!! IGNORED !!!
[info] - caching in memory and disk, replicated (encryption = on) (with replication as stream) (8 seconds, 63 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (10 seconds, 408 milliseconds)
[info] - include an external JAR in SparkR (16 seconds, 5 milliseconds)
[info] - resolves command line argument paths correctly (616 milliseconds)
[info] - ambiguous archive mapping results in error message (56 milliseconds)
[info] - resolves config paths correctly (312 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream) (8 seconds, 611 milliseconds)
[info] - user classpath first in driver (5 seconds, 8 milliseconds)
[info] - SPARK_CONF_DIR overrides spark-defaults.conf (7 milliseconds)
[info] - support glob path (44 milliseconds)
[info] - downloadFile - invalid url (41 milliseconds)
[info] - downloadFile - file doesn't exist (34 milliseconds)
[info] - downloadFile does not download local file (24 milliseconds)
[info] - download one file to local (37 milliseconds)
[info] - download list of files to local (41 milliseconds)
[info] - remove copies of application jar from classpath (40 milliseconds)
[info] - Avoid re-upload remote resources in yarn client mode (78 milliseconds)
[info] - download remote resource if it is not supported by yarn service (112 milliseconds)
[info] - avoid downloading remote resource if it is supported by yarn service (109 milliseconds)
[info] - force download from blacklisted schemes (94 milliseconds)
[info] - force download for all the schemes (122 milliseconds)
[info] - start SparkApplication without modifying system properties (98 milliseconds)
[info] - support --py-files/spark.submit.pyFiles in non-pyspark application (185 milliseconds)
[info] - handles natural line delimiters in --properties-file and --conf uniformly (48 milliseconds)
[info] NettyRpcEnvSuite:
[info] - send a message locally (4 milliseconds)
[info] - send a message remotely (61 milliseconds)
[info] - send a RpcEndpointRef (2 milliseconds)
[info] - ask a message locally (2 milliseconds)
[info] - ask a message remotely (68 milliseconds)
[info] - ask a message timeout (62 milliseconds)
[info] - onStart and onStop (1 millisecond)
[info] - onError: error in onStart (1 millisecond)
[info] - onError: error in onStop (2 milliseconds)
[info] - onError: error in receive (3 milliseconds)
[info] - self: call in onStart (1 millisecond)
[info] - self: call in receive (8 milliseconds)
[info] - self: call in onStop (2 milliseconds)
[info] - call receive in sequence (362 milliseconds)
[info] - stop(RpcEndpointRef) reentrant (2 milliseconds)
[info] - sendWithReply (2 milliseconds)
[info] - sendWithReply: remotely (68 milliseconds)
[info] - sendWithReply: error (2 milliseconds)
[info] - sendWithReply: remotely error (56 milliseconds)
[info] - network events in server RpcEnv when another RpcEnv is in server mode (921 milliseconds)
[info] - network events in server RpcEnv when another RpcEnv is in client mode (207 milliseconds)
[info] - network events in client RpcEnv when another RpcEnv is in server mode (715 milliseconds)
[info] - sendWithReply: unserializable error (571 milliseconds)
[info] - port conflict (57 milliseconds)
[info] - send with authentication (719 milliseconds)
[info] - send with SASL encryption (220 milliseconds)
[info] - send with AES encryption (252 milliseconds)
[info] - ask with authentication (140 milliseconds)
[info] - ask with SASL encryption (234 milliseconds)
[info] - ask with AES encryption (170 milliseconds)
[info] - construct RpcTimeout with conf property (2 milliseconds)
[info] - ask a message timeout on Future using RpcTimeout (26 milliseconds)
[info] - file server (208 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (8 seconds, 830 milliseconds)
[info] - SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events (1 second, 97 milliseconds)
[info] - non-existent endpoint (2 milliseconds)
[info] - advertise address different from bind address (49 milliseconds)
[info] - RequestMessage serialization (9 milliseconds)
Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113)
	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - StackOverflowError should be sent back and Dispatcher should survive (53 milliseconds)
[info] JsonProtocolSuite:
[info] - SparkListenerEvent (282 milliseconds)
[info] - Dependent Classes (19 milliseconds)
[info] - ExceptionFailure backward compatibility: full stack trace (3 milliseconds)
[info] - StageInfo backward compatibility (details, accumulables) (3 milliseconds)
[info] - InputMetrics backward compatibility (1 millisecond)
[info] - Input/Output records backwards compatibility (3 milliseconds)
[info] - Shuffle Read/Write records backwards compatibility (2 milliseconds)
[info] - OutputMetrics backward compatibility (2 milliseconds)
[info] - BlockManager events backward compatibility (1 millisecond)
[info] - FetchFailed backwards compatibility (1 millisecond)
[info] - ShuffleReadMetrics: Local bytes read backwards compatibility (1 millisecond)
[info] - SparkListenerApplicationStart backwards compatibility (1 millisecond)
[info] - ExecutorLostFailure backward compatibility (1 millisecond)
[info] - SparkListenerJobStart backward compatibility (3 milliseconds)
[info] - SparkListenerJobStart and SparkListenerJobEnd backward compatibility (3 milliseconds)
[info] - RDDInfo backward compatibility (scope, parent IDs, callsite) (1 millisecond)
[info] - StageInfo backward compatibility (parent IDs) (1 millisecond)
[info] - TaskCommitDenied backward compatibility (98 milliseconds)
[info] - AccumulableInfo backward compatibility (3 milliseconds)
[info] - ExceptionFailure backward compatibility: accumulator updates (12 milliseconds)
[info] - AccumulableInfo value de/serialization (3 milliseconds)
[info] RPackageUtilsSuite:
[info] - run Spark in yarn-client mode (36 seconds, 306 milliseconds)
[info] - pick which jars to unpack using the manifest (296 milliseconds)
[info] - build an R package from a jar end to end (2 seconds, 646 milliseconds)
[info] - jars that don't exist are skipped and print warning (291 milliseconds)
[info] - faulty R package shows documentation (504 milliseconds)
[info] - jars without manifest return false (278 milliseconds)
[info] - SparkR zipping works properly (7 milliseconds)
[info] TopologyMapperSuite:
[info] - File based Topology Mapper (86 milliseconds)
[info] EventLoggingListenerSuite:
[info] - Verify log file exist (49 milliseconds)
[info] - Basic event logging (81 milliseconds)
[info] - Basic event logging with compression (555 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream) (11 seconds, 699 milliseconds)
[info] - End-to-end event logging (9 seconds, 230 milliseconds)
[info] - compute without caching when no partitions fit in memory (13 seconds, 363 milliseconds)
[info] - compute when only some partitions fit in memory (12 seconds, 876 milliseconds)
[info] - compacted topic (2 minutes, 5 seconds)
[info] - passing environment variables to cluster (12 seconds, 67 milliseconds)
[info] - iterator boundary conditions (1 second, 178 milliseconds)
[info] - executor sorting (9 milliseconds)
[info] - End-to-end event logging with compression (36 seconds, 801 milliseconds)
[info] - Event logging with password redaction (33 milliseconds)
[info] - Log overwriting (113 milliseconds)
[info] - Event log name (0 milliseconds)
[info] FileCommitProtocolInstantiationSuite:
[info] - Dynamic partitions require appropriate constructor (2 milliseconds)
[info] - Standard partitions work with classic constructor (0 milliseconds)
[info] - Three arg constructors have priority (1 millisecond)
[info] - Three arg constructors have priority when dynamic (0 milliseconds)
[info] - The protocol must be of the correct class (1 millisecond)
[info] - If there is no matching constructor, class hierarchy is irrelevant (1 millisecond)
[info] JobCancellationSuite:
[info] - local mode, FIFO scheduler (141 milliseconds)
[info] - local mode, fair scheduler (156 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - cluster mode, FIFO scheduler (4 seconds, 178 milliseconds)
[info] - recover from node failures (8 seconds, 50 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (3 seconds, 33 milliseconds)
[info] - cluster mode, fair scheduler (3 seconds, 615 milliseconds)
[info] - do not put partially executed partitions into cache (162 milliseconds)
[info] - job group (85 milliseconds)
[info] - inherited job group (SPARK-6629) (120 milliseconds)
[info] - job group with interruption (162 milliseconds)
[info] - pattern based subscription (2 seconds, 847 milliseconds)
[info] - receiving from largest starting offset (911 milliseconds)
[info] - creating stream by offset (932 milliseconds)
[info] - run Spark in yarn-cluster mode *** FAILED *** (1 minute, 5 seconds)
[info]   FAILED did not equal FINISHED (stdout/stderr was not captured) (BaseYarnClusterSuite.scala:201)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.deploy.yarn.BaseYarnClusterSuite.checkResult(BaseYarnClusterSuite.scala:201)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite.org$apache$spark$deploy$yarn$YarnClusterSuite$$testBasicYarnApp(YarnClusterSuite.scala:242)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply$mcV$sp(YarnClusterSuite.scala:88)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply(YarnClusterSuite.scala:88)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$2.apply(YarnClusterSuite.scala:88)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - offset recovery (6 seconds, 97 milliseconds)
[info] - recover from repeated node failures during shuffle-map (11 seconds, 392 milliseconds)
[info] - offset recovery from kafka *** FAILED *** (1 second, 834 milliseconds)
[info]   0 was not greater than 0 (DirectKafkaStreamSuite.scala:477)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite$$anonfun$6$$anonfun$apply$mcV$sp$27.apply(DirectKafkaStreamSuite.scala:477)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite$$anonfun$6$$anonfun$apply$mcV$sp$27.apply(DirectKafkaStreamSuite.scala:474)
[info]   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
[info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
[info]   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
[info]   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite$$anonfun$6.apply$mcV$sp(DirectKafkaStreamSuite.scala:474)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite$$anonfun$6.apply(DirectKafkaStreamSuite.scala:417)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite$$anonfun$6.apply(DirectKafkaStreamSuite.scala:417)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite.org$scalatest$BeforeAndAfter$$super$runTest(DirectKafkaStreamSuite.scala:47)
[info]   at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:203)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite.runTest(DirectKafkaStreamSuite.scala:47)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite.org$scalatest$BeforeAndAfter$$super$run(DirectKafkaStreamSuite.scala:47)
[info]   at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:258)
[info]   at org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite.run(DirectKafkaStreamSuite.scala:47)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - Direct Kafka stream reports input information (975 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (166 milliseconds)
[info] - maxMessagesPerPartition with no lag (93 milliseconds)
[info] - maxMessagesPerPartition respects max rate (105 milliseconds)
[info] - using rate controller (5 seconds, 88 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (2 seconds, 17 milliseconds)
[info] - task reaper kills JVM if killed tasks keep running for too long (19 seconds, 204 milliseconds)
[info] - use backpressure.initialRate with backpressure (1 second, 756 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to the specified minimum with default 1 (69 milliseconds)
[info] KafkaDataConsumerSuite:
[info] - KafkaDataConsumer reuse in case of same groupId and TopicPartition (7 milliseconds)
[info] - concurrent use of KafkaDataConsumer (2 seconds, 53 milliseconds)
[info] - run Spark in yarn-client mode with different configurations, ensuring redaction (22 seconds, 38 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaLocationStrategySuite.testLocationStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.008s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite.testKafkaRDD started
[info] - task reaper will not kill JVM if spark.task.killTimeout == -1 (13 seconds, 562 milliseconds)
[info] - two jobs sharing the same stage (104 milliseconds)
[info] - interruptible iterator of shuffle reader (192 milliseconds)
[info] TaskContextSuite:
[info] - provide metrics sources (81 milliseconds)
[info] - calls TaskCompletionListener after failure (84 milliseconds)
[info] - calls TaskFailureListeners after failure (99 milliseconds)
[info] - all TaskCompletionListeners should be called even if some fail (15 milliseconds)
[info] - all TaskFailureListeners should be called even if some fail (9 milliseconds)
[info] - TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014) (116 milliseconds)
[info] - TaskContext.stageAttemptNumber getter (584 milliseconds)
[info] - accumulators are updated on exception failures (156 milliseconds)
[info] - failed tasks collect only accumulators whose values count during failures (89 milliseconds)
[info] - only updated internal accumulators will be sent back to driver (69 milliseconds)
[info] - localProperties are propagated to executors correctly (111 milliseconds)
[info] - immediately call a completion listener if the context is completed (2 milliseconds)
[info] - immediately call a failure listener if the context has failed (1 millisecond)
[info] - TaskCompletionListenerException.getMessage should include previousError (1 millisecond)
[info] - all TaskCompletionListeners should be called even if some fail or a task (6 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 3.927s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaConsumerStrategySuite.testConsumerStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] DAGSchedulerSuite:
[info] - [SPARK-3353] parent stage should have lower stage id (75 milliseconds)
[info] - [SPARK-13902] Ensure no duplicate stages are created (26 milliseconds)
[info] - All shuffle files on the slave should be cleaned up when slave lost (148 milliseconds)
[info] - zero split job (7 milliseconds)
[info] - run trivial job (6 milliseconds)
[info] - run trivial job w/ dependency (6 milliseconds)
[info] - equals and hashCode AccumulableInfo (1 millisecond)
[info] - cache location preferences w/ dependency (9 milliseconds)
[info] - regression test for getCacheLocs (25 milliseconds)
[info] - getMissingParentStages should consider all ancestor RDDs' cache statuses (6 milliseconds)
[info] - avoid exponential blowup when getting preferred locs list (64 milliseconds)
[info] - unserializable task (9 milliseconds)
[info] - trivial job failure (4 milliseconds)
[info] - trivial job cancellation (5 milliseconds)
[info] - job cancellation no-kill backend (50 milliseconds)
[info] - run trivial shuffle (12 milliseconds)
[info] - run trivial shuffle with fetch failure (24 milliseconds)
[info] - shuffle files not lost when slave lost with shuffle service (126 milliseconds)
[info] - shuffle files lost when worker lost with shuffle service (139 milliseconds)
[info] - shuffle files lost when worker lost without shuffle service (115 milliseconds)
[info] - shuffle files not lost when executor failure with shuffle service (123 milliseconds)
[info] - shuffle files lost when executor failure without shuffle service (99 milliseconds)
[info] - Single stage fetch failure should not abort the stage. (48 milliseconds)
[info] - Multiple consecutive stage fetch failures should lead to job being aborted. (38 milliseconds)
[info] - Failures in different stages should not trigger an overall abort (63 milliseconds)
[info] - Non-consecutive stage failures don't trigger abort (65 milliseconds)
[info] - trivial shuffle with multiple fetch failures (20 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure (33 milliseconds)
[info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled (32 milliseconds)
[info] - Fail the job if a barrier ResultTask failed (13 milliseconds)
[info] - late fetch failures don't cause multiple concurrent attempts for the same map stage (17 milliseconds)
[info] - extremely late fetch failures don't cause multiple concurrent attempts for the same stage (32 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 5.018s
[info] - task events always posted in speculation / when stage is killed (38 milliseconds)
[info] - ignore late map task completions (12 milliseconds)
[info] - run shuffle with map stage failure (7 milliseconds)
[info] - shuffle fetch failure in a reused shuffle dependency (34 milliseconds)
[info] - don't submit stage until its dependencies map outputs are registered (SPARK-5259) (30 milliseconds)
[info] - register map outputs correctly after ExecutorLost and task Resubmitted (25 milliseconds)
[info] - failure of stage used by two jobs (10 milliseconds)
[info] ExecutorPodsSnapshotSuite:
[info] - stage used by two jobs, the first no longer active (SPARK-6880) (20 milliseconds)
[info] - stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) (35 milliseconds)
[info] - States are interpreted correctly from pod metadata. (233 milliseconds)
[info] - Updates add new pods for non-matching ids and edit existing pods for matching ids (6 milliseconds)
[info] EnvSecretsFeatureStepSuite:
[info] - run trivial shuffle with out-of-band executor failure and retry (19 milliseconds)
[info] - sets up all keyRefs (26 milliseconds)
[info] RDriverFeatureStepSuite:
[info] - R Step modifies container correctly (89 milliseconds)
[info] ExecutorPodsPollingSnapshotSourceSuite:
[info] - recursive shuffle failures (29 milliseconds)
[info] - cached post-shuffle (38 milliseconds)
[info] - Items returned by the API should be pushed to the event queue (17 milliseconds)
[info] BasicExecutorFeatureStepSuite:
[info] - basic executor pod has reasonable defaults (49 milliseconds)
[info] - executor pod hostnames get truncated to 63 characters (4 milliseconds)
[info] - classpath and extra java options get translated into environment variables (4 milliseconds)
[info] - test executor pyspark memory (5 milliseconds)
[info] DriverKubernetesCredentialsFeatureStepSuite:
[info] - misbehaved accumulator should not crash DAGScheduler and SparkContext (37 milliseconds)
[info] - Don't set any credentials (12 milliseconds)
[info] - Only set credentials that are manually mounted. (3 milliseconds)
[info] - Mount credentials from the submission client as a secret. (67 milliseconds)
[info] ClientSuite:
[info] - misbehaved accumulator should not impact other accumulators (22 milliseconds)
[info] - The client should configure the pod using the builder. (15 milliseconds)
[info] - The client should create Kubernetes resources (4 milliseconds)
[info] - Waiting for app completion should stall on the watcher (3 milliseconds)
[info] DriverServiceFeatureStepSuite:
[info] - Headless service has a port for the driver RPC and the block manager. (17 milliseconds)
[info] - Hostname and ports are set according to the service name. (1 millisecond)
[info] - Ports should resolve to defaults in SparkConf and in the service. (2 milliseconds)
[info] - Long prefixes should switch to using a generated name. (3 milliseconds)
[info] - Disallow bind address and driver host to be set explicitly. (1 millisecond)
[info] - misbehaved resultHandler should not crash DAGScheduler and SparkContext (22 milliseconds)
[info] KubernetesDriverBuilderSuite:
[info] - Apply fundamental steps all the time. (8 milliseconds)
[info] - Apply secrets step if secrets are present. (3 milliseconds)
[info] - Apply Java step if main resource is none. (3 milliseconds)
[info] - Apply Python step if main resource is python. (5 milliseconds)
[info] - Apply volumes step if mounts are present. (5 milliseconds)
[info] - Apply R step if main resource is R. (4 milliseconds)
[info] ExecutorPodsAllocatorSuite:
[info] - Initially request executors in batches. Do not request another batch if the first has not finished. (128 milliseconds)
[info] - getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) (120 milliseconds)
[info] - Request executors in batches. Allow another batch to be requested if all pending executors start running. (14 milliseconds)
[info] - When a current batch reaches error states immediately, re-request them on the next batch. (13 milliseconds)
[info] - When an executor is requested but the API does not report it in a reasonable time, retry requesting that executor. (5 milliseconds)
[info] KubernetesClusterSchedulerBackendSuite:
[info] - getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) (25 milliseconds)
[info] - Start all components (4 milliseconds)
[info] - Stop all components (10 milliseconds)
[info] - Remove executor (4 milliseconds)
[info] - Kill executors (10 milliseconds)
[info] - Request total executors (3 milliseconds)
[info] KubernetesConfSuite:
[info] - Basic driver translated fields. (5 milliseconds)
[info] - Creating driver conf with and without the main app jar influences spark.jars (5 milliseconds)
[info] - Creating driver conf with a python primary file (2 milliseconds)
[info] - Creating driver conf with an R primary file (1 millisecond)
[info] - Testing explicit setting of memory overhead on non-JVM tasks (1 millisecond)
[info] - Resolve driver labels, annotations, secret mount paths, envs, and memory overhead (3 milliseconds)
[info] - Basic executor translated fields. (1 millisecond)
[info] - accumulator not calculated for resubmitted result stage (9 milliseconds)
[info] - Image pull secrets. (1 millisecond)
[info] - Set executor labels, annotations, and secrets (3 milliseconds)
[info] KubernetesVolumeUtilsSuite:
[info] - Parses hostPath volumes correctly (4 milliseconds)
[info] - Parses persistentVolumeClaim volumes correctly (1 millisecond)
[info] - Parses emptyDir volumes correctly (1 millisecond)
[info] - Parses emptyDir volumes when options are omitted (0 milliseconds)
[info] - Defaults optional readOnly to false (1 millisecond)
[info] - Gracefully fails on missing mount key (1 millisecond)
[info] - Gracefully fails on missing option key (1 millisecond)
[info] BasicDriverFeatureStepSuite:
[info] - Check the pod respects all configurations from the user. (11 milliseconds)
[info] - Check appropriate entrypoint rerouting for various bindings (3 milliseconds)
[info] - Additional system properties resolve jars and set cluster-mode confs. (2 milliseconds)
[info] ExecutorPodsSnapshotsStoreSuite:
[info] - Subscribers get notified of events periodically. (7 milliseconds)
[info] - Even without sending events, initially receive an empty buffer. (1 millisecond)
[info] - Replacing the snapshot passes the new snapshot to subscribers. (2 milliseconds)
[info] MountVolumesFeatureStepSuite:
[info] - Mounts hostPath volumes (5 milliseconds)
[info] - Mounts persistentVolumeClaims (5 milliseconds)
[info] - Mounts emptyDir (4 milliseconds)
[info] - Mounts emptyDir with no options (1 millisecond)
[info] - Mounts multiple volumes (2 milliseconds)
[info] MountSecretsFeatureStepSuite:
[info] - mounts all given secrets (6 milliseconds)
[info] ExecutorPodsLifecycleManagerSuite:
[info] - accumulator not calculated for resubmitted task in result stage (7 milliseconds)
[info] - When an executor reaches error states immediately, remove from the scheduler backend. (13 milliseconds)
[info] - Don't remove executors twice from Spark but remove from K8s repeatedly. (4 milliseconds)
[info] - When the scheduler backend lists executor ids that aren't present in the cluster, remove those executors from Spark. (3 milliseconds)
[info] JavaDriverFeatureStepSuite:
[info] - Java Step modifies container correctly (2 milliseconds)
[info] ExecutorPodsWatchSnapshotSourceSuite:
[info] - Watch events should be pushed to the snapshots store as snapshot updates. (4 milliseconds)
[info] LocalDirsFeatureStepSuite:
[info] - Resolve to default local dir if neither env nor configuration are set (25 milliseconds)
[info] - Use configured local dirs split on comma if provided. (2 milliseconds)
[info] PythonDriverFeatureStepSuite:
[info] - accumulators are updated on exception failures and task killed (8 milliseconds)
[info] - Python Step modifies container correctly (3 milliseconds)
[info] - Python Step testing empty pyfiles (1 millisecond)
[info] KubernetesExecutorBuilderSuite:
[info] - Basic steps are consistently applied. (5 milliseconds)
[info] - Apply secrets step if secrets are present. (2 milliseconds)
[info] - Apply volumes step if mounts are present. (2 milliseconds)
[info] - reduce tasks should be placed locally with map output (12 milliseconds)
[info] - reduce task locality preferences should only include machines with largest map outputs (11 milliseconds)
[info] - stages with both narrow and shuffle dependencies use narrow ones for locality (12 milliseconds)
[info] - Spark exceptions should include call site in stack trace (40 milliseconds)
[info] - catch errors in event loop (7 milliseconds)
[info] - simple map stage submission (26 milliseconds)
[info] - map stage submission with reduce stage also depending on the data (16 milliseconds)
[info] - map stage submission with fetch failure (25 milliseconds)
[info] - map stage submission with multiple shared stages and failures (42 milliseconds)
[info] ReceiverTrackerSuite:
[info] - Trigger mapstage's job listener in submitMissingTasks (20 milliseconds)
[info] - map stage submission with executor failure late map task completions (16 milliseconds)
[info] - getShuffleDependencies correctly returns only direct shuffle parents (1 millisecond)
[info] - SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures (1 second, 505 milliseconds)
[info] - [SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts (33 milliseconds)
[info] - task end event should have updated accumulators (SPARK-20342) (197 milliseconds)
[info] - Barrier task failures from the same stage attempt don't trigger multiple stage retries (14 milliseconds)
[info] - Barrier task failures from a previous stage attempt don't trigger stage retry (21 milliseconds)
[info] - SPARK-23207: retry all the succeeding stages when the map stage is indeterminate (12 milliseconds)
[info] - SPARK-29042: Sampled RDD with unordered input should be indeterminate (5 milliseconds)
[info] - send rate update to receivers (3 seconds, 115 milliseconds)
[info] - SPARK-23207: cannot rollback a result stage (155 milliseconds)
[info] - SPARK-23207: local checkpoint fails to rollback (checkpointed before) (720 milliseconds)
[info] - SPARK-23207: local checkpoint fails to rollback (checkpointing now) (10 milliseconds)
[info] - SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) (73 milliseconds)
[info] - SPARK-23207: reliable checkpoint fails to rollback (checkpointing now) (24 milliseconds)
[info] - SPARK-28699: abort stage if parent stage is indeterminate stage (10 milliseconds)
[info] - should restart receiver after stopping it (1 second, 668 milliseconds)
[info] PrefixComparatorsSuite:
[info] - String prefix comparator (215 milliseconds)
[info] - Binary prefix comparator (10 milliseconds)
[info] - double prefix comparator handles NaNs properly (1 millisecond)
[info] - double prefix comparator handles negative NaNs properly (0 milliseconds)
[info] - double prefix comparator handles other special values properly (1 millisecond)
[info] MasterWebUISuite:
[info] - SPARK-11063: TaskSetManager should use Receiver RDD's preferredLocations (588 milliseconds)
[info] - kill application (286 milliseconds)
[info] - kill driver (111 milliseconds)
[info] SorterSuite:
[info] - recover from repeated node failures during shuffle-reduce (41 seconds, 409 milliseconds)
[info] - equivalent to Arrays.sort (54 milliseconds)
[info] - KVArraySorter (81 milliseconds)
[info] - get allocated executors (709 milliseconds)
[info] RateLimitedOutputStreamSuite:
[info] - write (4 seconds, 163 milliseconds)
[info] RecurringTimerSuite:
[info] - basic (4 milliseconds)
[info] - SPARK-10224: call 'callback' after stopping (9 milliseconds)
[info] InputStreamsSuite:
Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
[info] - socket input stream (882 milliseconds)
[info] - socket input stream - no block in a batch (583 milliseconds)
[info] - recover from node failures with replication (9 seconds, 658 milliseconds)
[info] - binary records stream (6 seconds, 341 milliseconds)
[info] - file input stream - newFilesOnly = true (660 milliseconds)
[info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction *** FAILED *** (36 seconds, 33 milliseconds)
[info]   FAILED did not equal FINISHED (stdout/stderr was not captured) (BaseYarnClusterSuite.scala:201)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:528)
[info]   at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
[info]   at org.apache.spark.deploy.yarn.BaseYarnClusterSuite.checkResult(BaseYarnClusterSuite.scala:201)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite.org$apache$spark$deploy$yarn$YarnClusterSuite$$testBasicYarnApp(YarnClusterSuite.scala:242)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply$mcV$sp(YarnClusterSuite.scala:105)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply(YarnClusterSuite.scala:105)
[info]   at org.apache.spark.deploy.yarn.YarnClusterSuite$$anonfun$4.apply(YarnClusterSuite.scala:105)
[info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147)
[info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
[info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
[info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
[info]   at scala.collection.immutable.List.foreach(List.scala:392)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
[info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
[info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
[info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
[info]   at org.scalatest.Suite$class.run(Suite.scala:1147)
[info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
[info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54)
[info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - file input stream - newFilesOnly = false (877 milliseconds)
[info] - unpersist RDDs (4 seconds, 401 milliseconds)
[info] - file input stream - wildcard (1 second, 60 milliseconds)
[info] - multi-thread receiver (2 seconds, 64 milliseconds)
[info] - reference partitions inside a task (3 seconds, 261 milliseconds)
[info] - queue input stream - oneAtATime = true (1 second, 116 milliseconds)
[info] SparkAWSCredentialsBuilderSuite:
[info] - should build DefaultCredentials when given no params (28 milliseconds)
[info] - should build BasicCredentials (3 milliseconds)
[info] - should build STSCredentials (3 milliseconds)
[info] - SparkAWSCredentials classes should be serializable (6 milliseconds)
[info] KinesisCheckpointerSuite:
[info] - checkpoint is not called twice for the same sequence number (30 milliseconds)
[info] - checkpoint is called after sequence number increases (2 milliseconds)
[info] - should checkpoint if we have exceeded the checkpoint interval (12 milliseconds)
[info] - shouldn't checkpoint if we have not exceeded the checkpoint interval (1 millisecond)
[info] - should not checkpoint for the same sequence number (3 milliseconds)
[info] - removing checkpointer checkpoints one last time (1 millisecond)
[info] - if checkpointing is going on, wait until finished before removing and checkpointing (84 milliseconds)
[info] - queue input stream - oneAtATime = false (2 seconds, 115 milliseconds)
[info] - SPARK-5984 TimSort bug (20 seconds, 342 milliseconds)
[info] - Sorter benchmark for key-value pairs !!! IGNORED !!!
[info] - Sorter benchmark for primitive int array !!! IGNORED !!!
[info] RandomSamplerSuite:
[info] - utilities (8 milliseconds)
[info] - test track the number of input stream (98 milliseconds)
[info] WriteAheadLogUtilsSuite:
[info] - log selection and creation (37 milliseconds)
[info] - wrap WriteAheadLog in BatchedWriteAheadLog when batching is enabled (3 milliseconds)
[info] - batching is enabled by default in WriteAheadLog (0 milliseconds)
[info] - closeFileAfterWrite is disabled by default in WriteAheadLog (0 milliseconds)
[info] - sanity check medianKSD against references (132 milliseconds)
[info] ReceiverSchedulingPolicySuite:
[info] - rescheduleReceiver: empty executors (1 millisecond)
[info] - rescheduleReceiver: receiver preferredLocation (1 millisecond)
[info] - rescheduleReceiver: return all idle executors if there are any idle executors (5 milliseconds)
[info] - rescheduleReceiver: return all executors that have minimum weight if no idle executors (5 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more receivers than executors (4 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when there are more executors than receivers (4 milliseconds)
[info] - scheduleReceivers: schedule receivers evenly when the preferredLocations are even (6 milliseconds)
[info] - scheduleReceivers: return empty if no receiver (0 milliseconds)
[info] - scheduleReceivers: return empty scheduled executors if no executors (2 milliseconds)
[info] - bernoulli sampling (62 milliseconds)
[info] PIDRateEstimatorSuite:
[info] - the right estimator is created (41 milliseconds)
[info] - bernoulli sampling without iterator (111 milliseconds)
[info] - estimator checks ranges (4 milliseconds)
[info] - first estimate is None (3 milliseconds)
[info] - second estimate is not None (1 millisecond)
[info] - no estimate when no time difference between successive calls (2 milliseconds)
[info] - no estimate when no records in previous batch (1 millisecond)
[info] - no estimate when there is no processing delay (0 milliseconds)
[info] - estimate is never less than min rate (29 milliseconds)
[info] - with no accumulated or positive error, |I| > 0, follow the processing speed (4 milliseconds)
[info] - with no accumulated but some positive error, |I| > 0, follow the processing speed (4 milliseconds)
[info] - with some accumulated and some positive error, |I| > 0, stay below the processing speed (24 milliseconds)
[info] - bernoulli sampling with gap sampling optimization (122 milliseconds)
[info] ReceivedBlockHandlerSuite:
[info] - bernoulli sampling (without iterator) with gap sampling optimization (87 milliseconds)
[info] - bernoulli boundary cases (1 millisecond)
[info] - bernoulli (without iterator) boundary cases (2 milliseconds)
[info] - bernoulli data types (89 milliseconds)
[info] - bernoulli clone (27 milliseconds)
[info] - bernoulli set seed (42 milliseconds)
[info] - replacement sampling (59 milliseconds)
[info] - replacement sampling without iterator (64 milliseconds)
[info] KinesisInputDStreamBuilderSuite:
[info] - replacement sampling with gap sampling (141 milliseconds)
[info] - should raise an exception if the StreamingContext is missing (4 milliseconds)
[info] - should raise an exception if the stream name is missing (6 milliseconds)
[info] - should raise an exception if the checkpoint app name is missing (3 milliseconds)
[info] - replacement sampling (without iterator) with gap sampling (157 milliseconds)
[info] - replacement boundary cases (1 millisecond)
[info] - replacement (without iterator) boundary cases (1 millisecond)
[info] - replacement data types (87 milliseconds)
[info] - replacement clone (31 milliseconds)
[info] - should propagate required values to KinesisInputDStream (272 milliseconds)
[info] - should propagate default values to KinesisInputDStream (4 milliseconds)
[info] - should propagate custom non-auth values to KinesisInputDStream (14 milliseconds)
[info] - old Api should throw UnsupportedOperationException with AT_TIMESTAMP (3 milliseconds)
[info] - replacement set seed (60 milliseconds)
[info] - BlockManagerBasedBlockHandler - store blocks (855 milliseconds)
[info] - bernoulli partitioning sampling (29 milliseconds)
[info] - bernoulli partitioning sampling without iterator (39 milliseconds)
[info] - bernoulli partitioning boundary cases (2 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (22 milliseconds)
[info] - bernoulli partitioning (without iterator) boundary cases (5 milliseconds)
[info] - bernoulli partitioning data (2 milliseconds)
[info] - bernoulli partitioning clone (1 millisecond)
[info] PoolSuite:
[info] KinesisReceiverSuite:
[info] - process records including store and set checkpointer (6 milliseconds)
[info] - split into multiple processes if a limitation is set (3 milliseconds)
[info] - FIFO Scheduler Test (92 milliseconds)
[info] - shouldn't store and update checkpointer when receiver is stopped (3 milliseconds)
[info] - shouldn't update checkpointer when exception occurs during store (9 milliseconds)
[info] - shutdown should checkpoint if the reason is TERMINATE (7 milliseconds)
[info] - shutdown should not checkpoint if the reason is something other than TERMINATE (1 millisecond)
[info] - retry success on first attempt (2 milliseconds)
[info] - retry success on second attempt after a Kinesis throttling exception (16 milliseconds)
[info] - Fair Scheduler Test (77 milliseconds)
[info] - retry success on second attempt after a Kinesis dependency exception (55 milliseconds)
[info] - retry failed after a shutdown exception (5 milliseconds)
[info] - retry failed after an invalid state exception (4 milliseconds)
[info] - retry failed after unexpected exception (3 milliseconds)
[info] - retry failed after exhausting all retries (81 milliseconds)
[info] - Nested Pool Test (110 milliseconds)
[info] WithAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid data (9 milliseconds)
[info] WithoutAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - FIFO scheduler uses root pool and not spark.scheduler.pool property (72 milliseconds)
[info] WithAggregationKinesisStreamSuite:
[info] - FAIR Scheduler uses default pool when spark.scheduler.pool property is not set (79 milliseconds)
[info] - FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool (63 milliseconds)
[info] - Pool should throw IllegalArgumentException when schedulingMode is not supported (2 milliseconds)
[info] - KinesisUtils API (19 milliseconds)
[info] - Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set (67 milliseconds)
[info] - RDD generation (39 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (5 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisStreamSuite:
[info] - Fair Scheduler should use default file (fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set (83 milliseconds)
[info] - Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set (66 milliseconds)
[info] DiskStoreSuite:
[info] - KinesisUtils API (2 milliseconds)
[info] - RDD generation (4 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (3 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - reads of memory-mapped and non memory-mapped files are equivalent (32 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - store blocks (937 milliseconds)
[info] - block size tracking (29 milliseconds)
[info] - blocks larger than 2gb (32 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (33 milliseconds)
[info] - block data encryption (53 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsStsCreds started
[info] BlockManagerReplicationSuite:
[info] - get peers with addition and removal of block managers (47 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (173 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandler started
[info] - Test Block - count messages (426 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testAwsCreds started
[info] - Test Block - isFullyConsumed (52 milliseconds)
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testKinesisStream started
[info] ReceivedBlockHandlerWithEncryptionSuite:
[info] Test run finished: 0 failed, 0 ignored, 5 total, 1.174s
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilderOldApi started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilder started
[info] - block replication - 2x replication (1 second, 9 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.289s
[info] - BlockManagerBasedBlockHandler - store blocks (557 milliseconds)
[info] - BlockManagerBasedBlockHandler - handle errors in storing block (10 milliseconds)
[info] ScalaTest
[info] Run completed in 10 minutes, 3 seconds.
[info] Total number of tests run: 5
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 5, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 5, Failed 0, Errors 0, Passed 5
[info] ScalaTest
[info] Run completed in 10 minutes, 2 seconds.
[info] Total number of tests run: 85
[info] Suites: completed 8, aborted 0
[info] Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 85, Failed 0, Errors 0, Passed 85
[info] ScalaTest
[info] Run completed in 10 minutes, 3 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 104, Failed 0, Errors 0, Passed 103, Skipped 1
[info] ScalaTest
[info] Run completed in 9 minutes, 55 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - WriteAheadLogBasedBlockHandler - store blocks (747 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (33 milliseconds)
[info] - WriteAheadLogBasedBlockHandler - clean old blocks (104 milliseconds)
[info] - block replication - 3x replication (1 second, 391 milliseconds)
[info] - Test Block - count messages (270 milliseconds)
[info] GenerateUnsafeRowJoinerBitsetSuite:
[info] - Test Block - isFullyConsumed (57 milliseconds)
[info] InputInfoTrackerSuite:
[info] - test report and get InputInfo from InputInfoTracker (1 millisecond)
[info] - test cleanup InputInfo from InputInfoTracker (1 millisecond)
[info] JobGeneratorSuite:
[info] - bitset concat: boundary size 0, 0 (875 milliseconds)
[info]   + num fields: 0 and 0 
[info]   + num fields: 0 and 0 
[info]   + num fields: 0 and 0 
[info]   + num fields: 0 and 0 
[info]   + num fields: 0 and 0 
[info] - bitset concat: boundary size 0, 64 (78 milliseconds)
[info]   + num fields: 0 and 64 
[info]   + num fields: 0 and 64 
[info]   + num fields: 0 and 64 
[info]   + num fields: 0 and 64 
[info]   + num fields: 0 and 64 
[info] - bitset concat: boundary size 64, 0 (33 milliseconds)
[info]   + num fields: 64 and 0 
[info]   + num fields: 64 and 0 
[info]   + num fields: 64 and 0 
[info]   + num fields: 64 and 0 
[info]   + num fields: 64 and 0 
[info] - bitset concat: boundary size 64, 64 (45 milliseconds)
[info]   + num fields: 64 and 64 
[info]   + num fields: 64 and 64 
[info]   + num fields: 64 and 64 
[info]   + num fields: 64 and 64 
[info]   + num fields: 64 and 64 
[info] - bitset concat: boundary size 0, 128 (45 milliseconds)
[info]   + num fields: 0 and 128 
[info]   + num fields: 0 and 128 
[info]   + num fields: 0 and 128 
[info]   + num fields: 0 and 128 
[info]   + num fields: 0 and 128 
[info] - bitset concat: boundary size 128, 0 (67 milliseconds)
[info]   + num fields: 128 and 0 
[info]   + num fields: 128 and 0 
[info]   + num fields: 128 and 0 
[info]   + num fields: 128 and 0 
[info]   + num fields: 128 and 0 
[info] - bitset concat: boundary size 128, 128 (77 milliseconds)
[info]   + num fields: 128 and 128 
[info]   + num fields: 128 and 128 
[info]   + num fields: 128 and 128 
[info]   + num fields: 128 and 128 
[info]   + num fields: 128 and 128 
[info] - bitset concat: single word bitsets (41 milliseconds)
[info]   + num fields: 10 and 5 
[info]   + num fields: 10 and 5 
[info]   + num fields: 10 and 5 
[info]   + num fields: 10 and 5 
[info]   + num fields: 10 and 5 
[info] - bitset concat: first bitset larger than a word (47 milliseconds)
[info]   + num fields: 67 and 5 
[info]   + num fields: 67 and 5 
[info]   + num fields: 67 and 5 
[info]   + num fields: 67 and 5 
[info]   + num fields: 67 and 5 
[info] - bitset concat: second bitset larger than a word (26 milliseconds)
[info]   + num fields: 6 and 67 
[info]   + num fields: 6 and 67 
[info]   + num fields: 6 and 67 
[info]   + num fields: 6 and 67 
[info]   + num fields: 6 and 67 
[info] - bitset concat: no reduction in bitset size (29 milliseconds)
[info]   + num fields: 33 and 34 
[info]   + num fields: 33 and 34 
[info]   + num fields: 33 and 34 
[info]   + num fields: 33 and 34 
[info]   + num fields: 33 and 34 
[info] - bitset concat: two words (174 milliseconds)
[info]   + num fields: 120 and 95 
[info]   + num fields: 120 and 95 
[info]   + num fields: 120 and 95 
[info]   + num fields: 120 and 95 
[info]   + num fields: 120 and 95 
[info] - bitset concat: bitset 65, 128 (82 milliseconds)
[info]   + num fields: 65 and 128 
[info]   + num fields: 65 and 128 
[info]   + num fields: 65 and 128 
[info]   + num fields: 65 and 128 
[info]   + num fields: 65 and 128 
[info] - block replication - mixed between 1x to 5x (2 seconds, 505 milliseconds)
[info] - SPARK-6222: Do not clear received block data too soon (2 seconds, 685 milliseconds)
[info] - block replication - off-heap (431 milliseconds)
[info] ReceivedBlockTrackerSuite:
[info] - block addition, and block to batch allocation (5 milliseconds)
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - block replication - replication failures (79 milliseconds)
[info] - block replication - addition and deletion of block managers (552 milliseconds)
[info] BlockManagerProactiveReplicationSuite:
[info] - get peers with addition and removal of block managers (28 milliseconds)
[info] - bitset concat: randomized tests (4 seconds, 77 milliseconds)
[info]   + num fields: 623 and 6 
[info]   + num fields: 820 and 643 
[info]   + num fields: 597 and 902 
[info]   + num fields: 348 and 744 
[info]   + num fields: 585 and 550 
[info]   + num fields: 12 and 438 
[info]   + num fields: 901 and 888 
[info]   + num fields: 867 and 96 
[info]   + num fields: 304 and 809 
[info]   + num fields: 189 and 205 
[info]   + num fields: 22 and 549 
[info]   + num fields: 950 and 915 
[info]   + num fields: 35 and 819 
[info]   + num fields: 476 and 504 
[info]   + num fields: 201 and 750 
[info]   + num fields: 660 and 944 
[info]   + num fields: 990 and 134 
[info]   + num fields: 783 and 596 
[info]   + num fields: 19 and 395 
[info] ConstraintPropagationSuite:
[info] - propagating constraints in filters (297 milliseconds)
[info] - propagating constraints in aggregate (60 milliseconds)
[info] - propagating constraints in expand (190 milliseconds)
[info] - propagating constraints in aliases (81 milliseconds)
[info] - propagating constraints in union (40 milliseconds)
[info] - propagating constraints in intersect (9 milliseconds)
[info] - propagating constraints in except (8 milliseconds)
[info] - block replication - 2x replication (1 second, 895 milliseconds)
[info] - propagating constraints in inner join (21 milliseconds)
[info] - propagating constraints in left-semi join (7 milliseconds)
[info] - propagating constraints in left-outer join (8 milliseconds)
[info] - propagating constraints in right-outer join (8 milliseconds)
[info] - propagating constraints in full-outer join (8 milliseconds)
[info] - infer additional constraints in filters (7 milliseconds)
[info] - infer constraints on cast (60 milliseconds)
[info] - infer isnotnull constraints from compound expressions (78 milliseconds)
[info] - infer IsNotNull constraints from non-nullable attributes (3 milliseconds)
[info] - not infer non-deterministic constraints (16 milliseconds)
[info] - enable/disable constraint propagation (20 milliseconds)
[info] BufferHolderSparkSubmitSuite:
[info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (21 seconds, 41 milliseconds)
[info] - block replication - 3x replication (2 seconds, 591 milliseconds)
[info] - SPARK-22222: Buffer holder should be able to allocate memory larger than 1GB (4 seconds, 680 milliseconds)
[info] CombiningLimitsSuite:
[info] - limits: combines two limits (43 milliseconds)
[info] - limits: combines three limits (13 milliseconds)
[info] - limits: combines two limits after ColumnPruning (11 milliseconds)
[info] EliminateSerializationSuite:
[info] - block replication - mixed between 1x to 5x (2 seconds, 532 milliseconds)
[info] - block replication - off-heap (439 milliseconds)
[info] - block replication - 2x replication without peers (0 milliseconds)
[info] - block replication - replication failures (77 milliseconds)
[info] - block replication - addition and deletion of block managers (448 milliseconds)
[info] - back to back serialization (1 second, 759 milliseconds)
[info] - back to back serialization with object change (114 milliseconds)
[info] - proactive block replication - 2 replicas - 1 block manager deletions (126 milliseconds)
[info] - back to back serialization in AppendColumns (151 milliseconds)
[info] - back to back serialization in AppendColumns with object change (57 milliseconds)
[info] InferFiltersFromConstraintsSuite:
[info] - filter: filter out constraints in condition (25 milliseconds)
[info] - single inner join: filter out values on either side on equi-join keys (39 milliseconds)
[info] - single inner join: filter out nulls on either side on non equal keys (29 milliseconds)
[info] - single inner join with pre-existing filters: filter out values on either side (32 milliseconds)
[info] - single outer join: no null filters are generated (15 milliseconds)
[info] - proactive block replication - 3 replicas - 2 block manager deletions (159 milliseconds)
[info] - multiple inner joins: filter out values on all sides on equi-join keys (73 milliseconds)
[info] - inner join with filter: filter out values on all sides on equi-join keys (20 milliseconds)
[info] - inner join with alias: alias contains multiple attributes (42 milliseconds)
[info] - inner join with alias: alias contains single attributes (32 milliseconds)
[info] - generate correct filters for aliases that don't produce recursive constraints (11 milliseconds)
[info] - No inferred filter when constraint propagation is disabled (5 milliseconds)
[info] - constraints should be inferred from aliased literals (42 milliseconds)
[info] - SPARK-23405: left-semi equal-join should filter out null join keys on both sides (16 milliseconds)
[info] - SPARK-21479: Outer join after-join filters push down to null-supplying side (24 milliseconds)
[info] - SPARK-21479: Outer join pre-existing filters push down to null-supplying side (32 milliseconds)
[info] - SPARK-21479: Outer join no filter push down to preserved side (18 milliseconds)
[info] - SPARK-23564: left anti join should filter out null join keys on right side (13 milliseconds)
[info] - SPARK-23564: left outer join should filter out null join keys on right side (12 milliseconds)
[info] - SPARK-23564: right outer join should filter out null join keys on left side (16 milliseconds)
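The join cases above correspond to filters the optimizer can infer from join constraints. A sketch, reusing a spark session with implicits imported as in the earlier snippet: an inner equi-join lets isnotnull filters be inferred on the join keys of both sides.

    import spark.implicits._

    val left  = Seq((1, "x"), (2, "y")).toDF("k", "v1")
    val right = Seq((2, "y"), (3, "z")).toDF("k", "v2")
    // For this inner equi-join, isnotnull(k) can be inferred and pushed to
    // both inputs; compare the optimized plan with the analyzed plan to see
    // the added filters.
    println(left.join(right, "k").queryExecution.optimizedPlan)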
[info] RewriteDistinctAggregatesSuite:
[info] - single distinct group (19 milliseconds)
[info] - single distinct group with partial aggregates (10 milliseconds)
[info] - multiple distinct groups (19 milliseconds)
[info] - multiple distinct groups with partial aggregates (14 milliseconds)
[info] - multiple distinct groups with non-partial aggregates (15 milliseconds)
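The distinct-aggregate rewrite applies to queries with more than one DISTINCT group. A sketch of the query shape that triggers it (table and column names hypothetical; assumes the spark session from earlier): the rule expands each input row once per distinct group and then aggregates in two stages, so each count(DISTINCT ...) sees its own copy of the data.

    import spark.implicits._

    val t = Seq(("a1", "b1", "c"), ("a1", "b2", "c")).toDF("a", "b", "c")
    t.createOrReplaceTempView("t")
    // Two distinct groups, (a) and (b): rewritten into an Expand followed by
    // a two-stage aggregate, visible in the physical plan.
    spark.sql("SELECT c, count(DISTINCT a), count(DISTINCT b) FROM t GROUP BY c").explain()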
[info] NullExpressionsSuite:
[info] - proactive block replication - 4 replicas - 3 block manager deletions (265 milliseconds)
[info] - proactive block replication - 5 replicas - 4 block manager deletions (227 milliseconds)
[info] BlockManagerBasicStrategyReplicationSuite:
[info] - get peers with addition and removal of block managers (28 milliseconds)
[info] - isnull and isnotnull (1 second, 8 milliseconds)
[info] - AssertNotNull (2 milliseconds)
[info] - IsNaN (290 milliseconds)
[info] - block replication - 2x replication (654 milliseconds)
[info] - nanvl (299 milliseconds)
[info] - block replication - 3x replication (1 second, 533 milliseconds)
[info] - coalesce (2 seconds, 144 milliseconds)
[info] - SPARK-16602 Nvl should support numeric-string cases (41 milliseconds)
[info] - block addition, and block to batch allocation with many blocks (15 seconds, 122 milliseconds)
[info] - recovery with write ahead logs should remove only allocated blocks from received queue (23 milliseconds)
[info] - AtLeastNNonNulls (244 milliseconds)
[info] - block allocation to batch should not lose blocks from received queue (244 milliseconds)
[info] - recovery and cleanup with write ahead logs (62 milliseconds)
[info] - disable write ahead log when checkpoint directory is not set (1 millisecond)
[info] - parallel file deletion in FileBasedWriteAheadLog is robust to deletion error (54 milliseconds)
[info] WindowOperationsSuite:
[info] - window - basic window (792 milliseconds)
[info] - window - tumbling window (472 milliseconds)
[info] - Coalesce should not throw 64kb exception (1 second, 564 milliseconds)
[info] - SPARK-22705: Coalesce should use less global variables (3 milliseconds)
[info] - block replication - mixed between 1x to 5x (2 seconds, 642 milliseconds)
[info] - window - larger window (596 milliseconds)
[info] - block replication - off-heap (393 milliseconds)
[info] - block replication - 2x replication without peers (1 millisecond)
[info] - window - non-overlapping window (362 milliseconds)
[info] - window - persistence level (61 milliseconds)
[info] - block replication - replication failures (58 milliseconds)
[info] - AtLeastNNonNulls should not throw 64kb exception (1 second, 264 milliseconds)
[info] AttributeSetSuite:
[info] - sanity check (1 millisecond)
[info] - checks by id not name (1 millisecond)
[info] - ++ preserves AttributeSet (1 millisecond)
[info] - extracts all references (1 millisecond)
[info] - dedups attributes (1 millisecond)
[info] - subset (0 milliseconds)
[info] - equality (1 millisecond)
[info] - SPARK-18394 keep a deterministic output order along with attribute names and exprIds (3 milliseconds)
[info] - block replication - addition and deletion of block managers (323 milliseconds)
[info] - reduceByKeyAndWindow - basic reduction (531 milliseconds)
[info] FlatmapIteratorSuite:
[info] - Flatmap Iterator to Disk (148 milliseconds)
[info] - Flatmap Iterator to Memory (96 milliseconds)
[info] - reduceByKeyAndWindow - key already in window and new value added into window (389 milliseconds)
[info] - Serializer Reset (112 milliseconds)
[info] RDDSuite:
[info] ResolveInlineTablesSuite:
[info] - validate inputs are foldable (5 milliseconds)
[info] - validate input dimensions (3 milliseconds)
[info] - do not fire the rule if not all expressions are resolved (2 milliseconds)
[info] - convert (7 milliseconds)
[info] - convert TimeZoneAwareExpression (3 milliseconds)
[info] - nullability inference in convert (3 milliseconds)
[info] SortOrderExpressionsSuite:
[info] - reduceByKeyAndWindow - new key added into window (329 milliseconds)
[info] - reduceByKeyAndWindow - key removed from window (449 milliseconds)
[info] - SortPrefix (640 milliseconds)
[info] CheckCartesianProductsSuite:
[info] - basic operations (624 milliseconds)
[info] - serialization (2 milliseconds)
[info] - CheckCartesianProducts doesn't throw an exception if cross joins are enabled (20 milliseconds)
[info] - CheckCartesianProducts throws an exception for join types that require a join condition (13 milliseconds)
[info] - CheckCartesianProducts doesn't throw an exception if a join condition is present (13 milliseconds)
[info] - CheckCartesianProducts doesn't throw an exception if join types don't require conditions (6 milliseconds)
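These checks correspond to the user-facing cross-join guard. A sketch of the Spark 2.x behavior, assuming the spark session and the left/right DataFrames from the previous snippets:

    // With spark.sql.crossJoin.enabled at its Spark 2.x default (false),
    // a join with no condition fails analysis with an AnalysisException.
    spark.conf.set("spark.sql.crossJoin.enabled", "false")
    // left.join(right).count()            // would be rejected here
    left.crossJoin(right).count()          // explicit Cartesian product: always allowed
    spark.conf.set("spark.sql.crossJoin.enabled", "true")
    left.join(right).count()               // now permitted as an implicit cross join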
[info] RuleExecutorSuite:
[info] - only once (3 milliseconds)
[info] - to fixed point (1 millisecond)
[info] - to maxIterations (2 milliseconds)
[info] - structural integrity checker (2 milliseconds)
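RuleExecutor is Catalyst-internal, but the "to fixed point" semantics tested above are easy to sketch: a batch's rules are reapplied until the plan stops changing or maxIterations is reached. A toy executor using the Spark 2.x internal API, shown only to illustrate the contract:

    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.{Rule, RuleExecutor}

    // A no-op rule reaches the fixed point immediately; a rule that kept
    // changing the plan would only stop when maxIterations (100) is hit.
    object NoopRule extends Rule[LogicalPlan] {
      def apply(plan: LogicalPlan): LogicalPlan = plan
    }
    object ToyExecutor extends RuleExecutor[LogicalPlan] {
      val batches = Batch("toy batch", FixedPoint(100), NoopRule) :: Nil
    }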
[info] - countApproxDistinct (99 milliseconds)
[info] - SparkContext.union (48 milliseconds)
[info] TypeCoercionSuite:
[info] - implicit type cast - ByteType (7 milliseconds)
[info] - implicit type cast - ShortType (2 milliseconds)
[info] - implicit type cast - IntegerType (1 millisecond)
[info] - implicit type cast - LongType (1 millisecond)
[info] - implicit type cast - FloatType (2 milliseconds)
[info] - implicit type cast - DoubleType (1 millisecond)
[info] - implicit type cast - DecimalType(10, 2) (1 millisecond)
[info] - implicit type cast - BinaryType (1 millisecond)
[info] - implicit type cast - BooleanType (1 millisecond)
[info] - SparkContext.union parallel partition listing (98 milliseconds)
[info] - SparkContext.union creates UnionRDD if at least one RDD has no partitioner (4 milliseconds)
[info] - SparkContext.union creates PartitionAwareUnionRDD if all RDDs have partitioners (5 milliseconds)
[info] - PartitionAwareUnionRDD raises exception if at least one RDD has no partitioner (3 milliseconds)
[info] - SPARK-23778: empty RDD in union should not produce a UnionRDD (6 milliseconds)
[info] - reduceByKeyAndWindow - larger slide time (455 milliseconds)
[info] - implicit type cast - StringType (174 milliseconds)
[info] - implicit type cast - DateType (1 millisecond)
[info] - implicit type cast - TimestampType (2 milliseconds)
[info] - implicit type cast - ArrayType(StringType) (10 milliseconds)
[info] - implicit type cast between two Map types (10 milliseconds)
[info] - implicit type cast - StructType().add("a1", StringType) (6 milliseconds)
[info] - implicit type cast - NullType (1 millisecond)
[info] - implicit type cast - CalendarIntervalType (1 millisecond)
[info] - eligible implicit type cast - TypeCollection (2 milliseconds)
[info] - ineligible implicit type cast - TypeCollection (0 milliseconds)
[info] - tightest common bound for types (5 milliseconds)
[info] - wider common type for decimal and array (7 milliseconds)
[info] - cast NullType for expressions that implement ExpectsInputTypes (3 milliseconds)
[info] - cast NullType for binary operators (3 milliseconds)
[info] - coalesce casts (15 milliseconds)
[info] - CreateArray casts (5 milliseconds)
[info] - partitioner aware union (211 milliseconds)
[info] - UnionRDD partition serialized size should be small (5 milliseconds)
[info] - CreateMap casts (9 milliseconds)
[info] - greatest/least cast (11 milliseconds)
[info] - fold (13 milliseconds)
[info] - nanvl casts (4 milliseconds)
[info] - type coercion for If (8 milliseconds)
[info] - fold with op modifying first arg (15 milliseconds)
[info] - type coercion for CaseKeyWhen (9 milliseconds)
[info] - type coercion for Stack (8 milliseconds)
[info] - aggregate (16 milliseconds)
[info] - type coercion for Concat (10 milliseconds)
[info] - type coercion for Elt (12 milliseconds)
[info] - BooleanEquality type cast (5 milliseconds)
[info] - BooleanEquality simplification (6 milliseconds)
[info] - WidenSetOperationTypes for except and intersect (15 milliseconds)
[info] - WidenSetOperationTypes for union (3 milliseconds)
[info] - Transform Decimal precision/scale for union except and intersect (15 milliseconds)
[info] - rule for date/timestamp operations (9 milliseconds)
[info] - make sure rules do not fire early (5 milliseconds)
[info] - SPARK-15776 Divide expression's dataType should be cast to Double or Decimal in aggregation function like sum (4 milliseconds)
[info] - SPARK-17117 null type coercion in divide (2 milliseconds)
[info] - binary comparison with string promotion (8 milliseconds)
[info] - cast WindowFrame boundaries to the type they operate upon (6 milliseconds)
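Two of the coercions tested above, seen from SQL (a sketch assuming the spark session from earlier; results follow the Spark 2.x type-coercion rules):

    // Binary comparison with string promotion: '1' is implicitly cast,
    // so no explicit CAST is needed.
    spark.sql("SELECT 1 = '1'").show()   // true
    // Arithmetic promotes the string operand to double.
    spark.sql("SELECT 1 + '2'").show()   // 3.0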
[info] EncoderErrorMessageSuite:
[info] - primitive types in encoders using Kryo serialization (5 milliseconds)
[info] - primitive types in encoders using Java serialization (1 millisecond)
[info] - nice error message for missing encoder (91 milliseconds)
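The "missing encoder" message appears whenever a Dataset element type has no implicit Encoder in scope. A sketch of the usual workaround, falling back to a Kryo encoder (class name hypothetical; assumes the spark session from earlier):

    import org.apache.spark.sql.{Encoder, Encoders}

    class NotAProduct(val n: Int)   // not a case class, so no built-in encoder

    // Without this implicit, spark.createDataset cannot find an encoder for
    // NotAProduct; Kryo (or Java) serialization covers arbitrary classes.
    implicit val enc: Encoder[NotAProduct] = Encoders.kryo[NotAProduct]
    val ds = spark.createDataset(Seq(new NotAProduct(1), new NotAProduct(2)))
    ds.count()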
[info] RegexpExpressionsSuite:
[info] - reduceByKeyAndWindow - big test (665 milliseconds)
[info] - treeAggregate (963 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - basic reduction (546 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - key already in window and new value added into window (344 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - new key added into window (350 milliseconds)
[info] - treeAggregate with ops modifying first args (891 milliseconds)
[info] - LIKE Pattern (1 second, 816 milliseconds)
[info] - treeReduce (311 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - key removed from window (441 milliseconds)
[info] - basic caching (54 milliseconds)
[info] - caching with failures (18 milliseconds)
[info] - empty RDD (178 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - larger slide time (415 milliseconds)
[info] - RLIKE Regular Expression (812 milliseconds)
[info] - RegexReplace (157 milliseconds)
[info] - SPARK-22570: RegExpReplace should not create a lot of global variables (1 millisecond)
[info] - RegexExtract (199 milliseconds)
[info] - reduceByKeyAndWindow with inverse function - big test (638 milliseconds)
[info] - repartitioned RDDs (902 milliseconds)
[info] - SPLIT (123 milliseconds)
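The same regexp expressions are exposed as DataFrame functions. A sketch, assuming the spark session and implicits from earlier:

    import org.apache.spark.sql.functions.{regexp_extract, regexp_replace, split}
    import spark.implicits._

    val strs = Seq("100-200", "300-400").toDF("s")
    strs.select(
      regexp_extract($"s", "(\\d+)-(\\d+)", 1).as("first"),   // "100", "300"
      regexp_replace($"s", "-", ":").as("replaced"),          // "100:200", "300:400"
      split($"s", "-").as("parts")                            // ["100","200"], ["300","400"]
    ).show()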
[info] NondeterministicSuite:
[info] - MonotonicallyIncreasingID (31 milliseconds)
[info] - SparkPartitionID (34 milliseconds)
[info] - InputFileName (33 milliseconds)
[info] EncoderResolutionSuite:
[info] - real type doesn't match encoder schema but they are compatible: product (63 milliseconds)
[info] - real type doesn't match encoder schema but they are compatible: nested product (94 milliseconds)
[info] - real type doesn't match encoder schema but they are compatible: tupled encoder (48 milliseconds)
[info] - real type doesn't match encoder schema but they are compatible: primitive array (71 milliseconds)
[info] - the real type is not compatible with encoder schema: primitive array (22 milliseconds)
[info] - reduceByKeyAndWindow with inverse and filter functions - big test (699 milliseconds)
[info] - real type doesn't match encoder schema but they are compatible: array (301 milliseconds)
[info] - real type doesn't match encoder schema but they are compatible: nested array (99 milliseconds)
[info] - the real type is not compatible with encoder schema: non-array field (27 milliseconds)
[info] - the real type is not compatible with encoder schema: array element type (49 milliseconds)
[info] - the real type is not compatible with encoder schema: nested array element type (94 milliseconds)
[info] - nullability of array type element should not fail analysis (23 milliseconds)
[info] - the real number of fields doesn't match encoder schema: tuple encoder (14 milliseconds)
[info] - the real number of fields doesn't match encoder schema: nested tuple encoder (29 milliseconds)
[info] - nested case class can have different number of fields from the real schema (24 milliseconds)
[info] - throw exception if real type is not compatible with encoder schema (36 milliseconds)
[info] - cast from int to Long should succeed (2 milliseconds)
[info] - cast from date to java.sql.Timestamp should succeed (1 millisecond)
[info] - cast from bigint to String should succeed (1 millisecond)
[info] - cast from int to java.math.BigDecimal should succeed (1 millisecond)
[info] - cast from bigint to java.math.BigDecimal should succeed (2 milliseconds)
[info] - cast from bigint to Int should fail (1 millisecond)
[info] - cast from timestamp to java.sql.Date should fail (1 millisecond)
[info] - cast from decimal(38,18) to Double should fail (0 milliseconds)
[info] - cast from double to java.math.BigDecimal should fail (0 milliseconds)
[info] - cast from decimal(38,18) to Int should fail (1 millisecond)
[info] - cast from string to Long should fail (1 millisecond)
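These "should succeed / should fail" cases mirror the Dataset up-cast rules: widening casts are accepted when binding an encoder, while lossy casts fail analysis. A sketch, assuming the spark session and implicits from earlier:

    import spark.implicits._

    spark.range(3).as[Long].collect()   // ok: the bigint column is read as Long
    // spark.range(3).as[Int]           // AnalysisException: cannot up cast
    //                                  // `id` from bigint to int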
[info] - groupByKeyAndWindow (673 milliseconds)
[info] - countByWindow (937 milliseconds)
[info] ExpressionEncoderSuite:
[info] - encode/decode for primitive boolean: false (codegen path) (134 milliseconds)
[info] - encode/decode for primitive boolean: false (interpreted path) (71 milliseconds)
[info] - encode/decode for primitive byte: -3 (codegen path) (19 milliseconds)
[info] - encode/decode for primitive byte: -3 (interpreted path) (5 milliseconds)
[info] - encode/decode for primitive short: -3 (codegen path) (17 milliseconds)
[info] - encode/decode for primitive short: -3 (interpreted path) (5 milliseconds)
[info] - encode/decode for primitive int: -3 (codegen path) (19 milliseconds)
[info] - encode/decode for primitive int: -3 (interpreted path) (4 milliseconds)
[info] - encode/decode for primitive long: -3 (codegen path) (73 milliseconds)
[info] - encode/decode for primitive long: -3 (interpreted path) (6 milliseconds)
[info] - countByValueAndWindow (773 milliseconds)
[info] - encode/decode for primitive float: -3.7 (codegen path) (93 milliseconds)
[info] StreamingListenerSuite:
[info] - encode/decode for primitive float: -3.7 (interpreted path) (16 milliseconds)
[info] - encode/decode for primitive double: -3.7 (codegen path) (22 milliseconds)
[info] - encode/decode for primitive double: -3.7 (interpreted path) (4 milliseconds)
[info] - encode/decode for boxed boolean: false (codegen path) (19 milliseconds)
[info] - encode/decode for boxed boolean: false (interpreted path) (22 milliseconds)
[info] - encode/decode for boxed byte: -3 (codegen path) (18 milliseconds)
[info] - encode/decode for boxed byte: -3 (interpreted path) (5 milliseconds)
[info] - encode/decode for boxed short: -3 (codegen path) (18 milliseconds)
[info] - encode/decode for boxed short: -3 (interpreted path) (5 milliseconds)
[info] - encode/decode for boxed int: -3 (codegen path) (17 milliseconds)
[info] - encode/decode for boxed int: -3 (interpreted path) (5 milliseconds)
[info] - encode/decode for boxed long: -3 (codegen path) (16 milliseconds)
[info] - encode/decode for boxed long: -3 (interpreted path) (4 milliseconds)
[info] - encode/decode for boxed float: -3.7 (codegen path) (14 milliseconds)
[info] - encode/decode for boxed float: -3.7 (interpreted path) (4 milliseconds)
[info] - encode/decode for boxed double: -3.7 (codegen path) (20 milliseconds)
[info] - encode/decode for boxed double: -3.7 (interpreted path) (4 milliseconds)
[info] - encode/decode for scala decimal: 32131413.211321313 (codegen path) (20 milliseconds)
[info] - encode/decode for scala decimal: 32131413.211321313 (interpreted path) (4 milliseconds)
[info] - encode/decode for java decimal: 231341.23123 (codegen path) (17 milliseconds)
[info] - encode/decode for java decimal: 231341.23123 (interpreted path) (4 milliseconds)
[info] - encode/decode for scala biginteger: 23134123123 (codegen path) (15 milliseconds)
[info] - encode/decode for scala biginteger: 23134123123 (interpreted path) (3 milliseconds)
[info] - encode/decode for java BigInteger: 23134123123 (codegen path) (15 milliseconds)
[info] - encode/decode for java BigInteger: 23134123123 (interpreted path) (3 milliseconds)
[info] - encode/decode for catalyst decimal: 32131413.211321313 (codegen path) (10 milliseconds)
[info] - encode/decode for catalyst decimal: 32131413.211321313 (interpreted path) (3 milliseconds)
[info] - encode/decode for string: hello (codegen path) (19 milliseconds)
[info] - encode/decode for string: hello (interpreted path) (6 milliseconds)
[info] - encode/decode for date: 2012-12-23 (codegen path) (27 milliseconds)
[info] - encode/decode for date: 2012-12-23 (interpreted path) (6 milliseconds)
[info] - encode/decode for timestamp: 2016-01-29 10:00:00.0 (codegen path) (38 milliseconds)
[info] - encode/decode for timestamp: 2016-01-29 10:00:00.0 (interpreted path) (9 milliseconds)
[info] - encode/decode for array of timestamp: [Ljava.sql.Timestamp;@5bac8d5d (codegen path) (49 milliseconds)
[info] - encode/decode for array of timestamp: [Ljava.sql.Timestamp;@5bac8d5d (interpreted path) (338 milliseconds)
[info] - encode/decode for binary: [B@440db77a (codegen path) (22 milliseconds)
[info] - encode/decode for binary: [B@440db77a (interpreted path) (5 milliseconds)
[info] - encode/decode for seq of int: List(31, -123, 4) (codegen path) (105 milliseconds)
[info] - encode/decode for seq of int: List(31, -123, 4) (interpreted path) (17 milliseconds)
[info] - encode/decode for seq of string: List(abc, xyz) (codegen path) (34 milliseconds)
[info] - encode/decode for seq of string: List(abc, xyz) (interpreted path) (23 milliseconds)
[info] - encode/decode for seq of string with null: List(abc, null, xyz) (codegen path) (41 milliseconds)
[info] - batch info reporting (1 second, 71 milliseconds)
[info] - run Spark in yarn-client mode with additional jar (22 seconds, 53 milliseconds)
[info] - encode/decode for seq of string with null: List(abc, null, xyz) (interpreted path) (24 milliseconds)
[info] - encode/decode for empty seq of int: List() (codegen path) (91 milliseconds)
[info] - encode/decode for empty seq of int: List() (interpreted path) (82 milliseconds)
[info] - encode/decode for empty seq of string: List() (codegen path) (115 milliseconds)
[info] - encode/decode for empty seq of string: List() (interpreted path) (23 milliseconds)
[info] - encode/decode for seq of seq of int: List(List(31, -123), null, List(4, 67)) (codegen path) (114 milliseconds)
[info] - encode/decode for seq of seq of int: List(List(31, -123), null, List(4, 67)) (interpreted path) (43 milliseconds)
[info] - receiver info reporting (509 milliseconds)
[info] - encode/decode for seq of seq of string: List(List(abc, xyz), List(null), null, List(1, null, 2)) (codegen path) (131 milliseconds)
[info] - encode/decode for seq of seq of string: List(List(abc, xyz), List(null), null, List(1, null, 2)) (interpreted path) (118 milliseconds)
[info] - encode/decode for array of int: [I@7b444736 (codegen path) (76 milliseconds)
[info] - encode/decode for array of int: [I@7b444736 (interpreted path) (80 milliseconds)
[info] - encode/decode for array of string: [Ljava.lang.String;@39d2eca8 (codegen path) (27 milliseconds)
[info] - encode/decode for array of string: [Ljava.lang.String;@39d2eca8 (interpreted path) (14 milliseconds)
[info] - encode/decode for array of string with null: [Ljava.lang.String;@5e3b95b5 (codegen path) (256 milliseconds)
[info] - encode/decode for array of string with null: [Ljava.lang.String;@5e3b95b5 (interpreted path) (19 milliseconds)
[info] - encode/decode for empty array of int: [I@3fb414f5 (codegen path) (96 milliseconds)
[info] - output operation reporting (801 milliseconds)
[info] - encode/decode for empty array of int: [I@3fb414f5 (interpreted path) (17 milliseconds)
[info] - encode/decode for empty array of string: [Ljava.lang.String;@6a7256cc (codegen path) (31 milliseconds)
[info] - encode/decode for empty array of string: [Ljava.lang.String;@6a7256cc (interpreted path) (64 milliseconds)
[info] - encode/decode for array of array of int: [[I@68f5476 (codegen path) (46 milliseconds)
[info] - encode/decode for array of array of int: [[I@68f5476 (interpreted path) (26 milliseconds)
[info] - encode/decode for array of array of string: [[Ljava.lang.String;@62a8d83c (codegen path) (190 milliseconds)
[info] - encode/decode for array of array of string: [[Ljava.lang.String;@62a8d83c (interpreted path) (21 milliseconds)
[info] - encode/decode for map: Map(1 -> a, 2 -> b) (codegen path) (114 milliseconds)
[info] - encode/decode for map: Map(1 -> a, 2 -> b) (interpreted path) (11 milliseconds)
[info] - encode/decode for map with null: Map(1 -> a, 2 -> null) (codegen path) (89 milliseconds)
[info] - encode/decode for map with null: Map(1 -> a, 2 -> null) (interpreted path) (6 milliseconds)
[info] - encode/decode for map of map: Map(1 -> Map(a -> 1), 2 -> Map(b -> 2)) (codegen path) (42 milliseconds)
[info] - encode/decode for map of map: Map(1 -> Map(a -> 1), 2 -> Map(b -> 2)) (interpreted path) (8 milliseconds)
[info] - encode/decode for null seq in tuple: (null) (codegen path) (97 milliseconds)
[info] - don't call ssc.stop in listener (773 milliseconds)
[info] - encode/decode for null seq in tuple: (null) (interpreted path) (88 milliseconds)
[info] - encode/decode for null map in tuple: (null) (codegen path) (32 milliseconds)
[info] - encode/decode for null map in tuple: (null) (interpreted path) (7 milliseconds)
[info] - encode/decode for list of int: List(1, 2) (codegen path) (24 milliseconds)
[info] - encode/decode for list of int: List(1, 2) (interpreted path) (16 milliseconds)
[info] - encode/decode for list with String and null: List(a, null) (codegen path) (39 milliseconds)
[info] - encode/decode for list with String and null: List(a, null) (interpreted path) (15 milliseconds)
[info] - encode/decode for udt with case class: UDTCaseClass(http://spark.apache.org/) (codegen path) (18 milliseconds)
[info] - encode/decode for udt with case class: UDTCaseClass(http://spark.apache.org/) (interpreted path) (5 milliseconds)
[info] - encode/decode for kryo string: hello (codegen path) (223 milliseconds)
[info] - encode/decode for kryo string: hello (interpreted path) (14 milliseconds)
[info] - encode/decode for kryo object: org.apache.spark.sql.catalyst.encoders.KryoSerializable@f (codegen path) (90 milliseconds)
[info] - encode/decode for kryo object: org.apache.spark.sql.catalyst.encoders.KryoSerializable@f (interpreted path) (17 milliseconds)
[info] - encode/decode for java string: hello (codegen path) (104 milliseconds)
[info] - encode/decode for java string: hello (interpreted path) (5 milliseconds)
[info] - encode/decode for java object: org.apache.spark.sql.catalyst.encoders.JavaSerializable@f (codegen path) (20 milliseconds)
[info] - encode/decode for java object: org.apache.spark.sql.catalyst.encoders.JavaSerializable@f (interpreted path) (7 milliseconds)
[info] - encode/decode for InnerClass: InnerClass(1) (codegen path) (61 milliseconds)
[info] - encode/decode for InnerClass: InnerClass(1) (interpreted path) (9 milliseconds)
[info] - encode/decode for array of inner class: [Lorg.apache.spark.sql.catalyst.encoders.ExpressionEncoderSuite$InnerClass;@4f6c6e22 (codegen path) (126 milliseconds)
[info] - onBatchCompleted with successful batch (945 milliseconds)
[info] - encode/decode for array of inner class: [Lorg.apache.spark.sql.catalyst.encoders.ExpressionEncoderSuite$InnerClass;@4f6c6e22 (interpreted path) (35 milliseconds)
[info] - encode/decode for array of optional inner class: [Lscala.Option;@46189dfc (codegen path) (98 milliseconds)
[info] - encode/decode for array of optional inner class: [Lscala.Option;@46189dfc (interpreted path) (42 milliseconds)
[info] - encode/decode for PrimitiveData: PrimitiveData(1,1,1.0,1.0,1,1,true) (codegen path) (33 milliseconds)
[info] - encode/decode for PrimitiveData: PrimitiveData(1,1,1.0,1.0,1,1,true) (interpreted path) (10 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(Some(2),Some(2),Some(2.0),Some(2.0),Some(2),Some(2),Some(true),Some(PrimitiveData(1,1,1.0,1.0,1,1,true))) (codegen path) (79 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(Some(2),Some(2),Some(2.0),Some(2.0),Some(2),Some(2),Some(true),Some(PrimitiveData(1,1,1.0,1.0,1,1,true))) (interpreted path) (23 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(None,None,None,None,None,None,None,None) (codegen path) (31 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(None,None,None,None,None,None,None,None) (interpreted path) (19 milliseconds)
[info] - encode/decode for Option in array: List(Some(1), None) (codegen path) (27 milliseconds)
[info] - encode/decode for Option in array: List(Some(1), None) (interpreted path) (51 milliseconds)
[info] - encode/decode for Option in map: Map(1 -> Some(10), 2 -> Some(20), 3 -> None) (codegen path) (33 milliseconds)
[info] - encode/decode for Option in map: Map(1 -> Some(10), 2 -> Some(20), 3 -> None) (interpreted path) (5 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(1,1,1.0,1.0,1,1,true) (codegen path) (62 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(1,1,1.0,1.0,1,1,true) (interpreted path) (12 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(null,null,null,null,null,null,null) (codegen path) (12 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(null,null,null,null,null,null,null) (interpreted path) (10 milliseconds)
[info] - encode/decode for RepeatedStruct: RepeatedStruct(List(PrimitiveData(1,1,1.0,1.0,1,1,true))) (codegen path) (65 milliseconds)
[info] - encode/decode for RepeatedStruct: RepeatedStruct(List(PrimitiveData(1,1,1.0,1.0,1,1,true))) (interpreted path) (49 milliseconds)
[info] - encode/decode for Tuple3: (1,test,PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (50 milliseconds)
[info] - encode/decode for Tuple3: (1,test,PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (12 milliseconds)
[info] - encode/decode for RepeatedData: RepeatedData(List(1, 2),List(1, null, 2),Map(1 -> 2),Map(1 -> null),PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (70 milliseconds)
[info] - encode/decode for RepeatedData: RepeatedData(List(1, 2),List(1, null, 2),Map(1 -> 2),Map(1 -> null),PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (41 milliseconds)
[info] - encode/decode for NestedArray: NestedArray([[I@6da22dec) (codegen path) (32 milliseconds)
[info] - encode/decode for NestedArray: NestedArray([[I@6da22dec) (interpreted path) (63 milliseconds)
[info] - onBatchCompleted with failed batch and one failed job (1 second, 70 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(String, String)],List((a,b))) (codegen path) (112 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(String, String)],List((a,b))) (interpreted path) (90 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Int, Int)],List((1,2))) (codegen path) (1 second, 116 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Int, Int)],List((1,2))) (interpreted path) (194 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Long, Long)],List((1,2))) (codegen path) (48 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Long, Long)],List((1,2))) (interpreted path) (29 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Float, Float)],List((1.0,2.0))) (codegen path) (52 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Float, Float)],List((1.0,2.0))) (interpreted path) (28 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Double, Double)],List((1.0,2.0))) (codegen path) (44 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Double, Double)],List((1.0,2.0))) (interpreted path) (136 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Short, Short)],List((1,2))) (codegen path) (45 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Short, Short)],List((1,2))) (interpreted path) (31 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Byte, Byte)],List((1,2))) (codegen path) (56 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Byte, Byte)],List((1,2))) (interpreted path) (98 milliseconds)
[info] - onBatchCompleted with failed batch and multiple failed jobs (2 seconds, 59 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Boolean, Boolean)],List((true,false))) (codegen path) (111 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Boolean, Boolean)],List((true,false))) (interpreted path) (41 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(String, String)],ArrayBuffer((a,b))) (codegen path) (67 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(String, String)],ArrayBuffer((a,b))) (interpreted path) (185 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Int, Int)],ArrayBuffer((1,2))) (codegen path) (119 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Int, Int)],ArrayBuffer((1,2))) (interpreted path) (135 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Long, Long)],ArrayBuffer((1,2))) (codegen path) (233 milliseconds)
[info] - StreamingListener receives no events after stopping StreamingListenerBus (833 milliseconds)
[info] ReceiverInputDStreamSuite:
[info] - encode/decode for Tuple2: (ArrayBuffer[(Long, Long)],ArrayBuffer((1,2))) (interpreted path) (37 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Float, Float)],ArrayBuffer((1.0,2.0))) (codegen path) (54 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Float, Float)],ArrayBuffer((1.0,2.0))) (interpreted path) (33 milliseconds)
[info] - Without WAL enabled: createBlockRDD creates empty BlockRDD when no block info (92 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Double, Double)],ArrayBuffer((1.0,2.0))) (codegen path) (137 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Double, Double)],ArrayBuffer((1.0,2.0))) (interpreted path) (188 milliseconds)
[info] - Without WAL enabled: createBlockRDD creates correct BlockRDD with block info (342 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Short, Short)],ArrayBuffer((1,2))) (codegen path) (54 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Short, Short)],ArrayBuffer((1,2))) (interpreted path) (38 milliseconds)
[info] - Without WAL enabled: createBlockRDD filters non-existent blocks before creating BlockRDD (74 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Byte, Byte)],ArrayBuffer((1,2))) (codegen path) (52 milliseconds)
[info] - With WAL enabled: createBlockRDD creates empty WALBackedBlockRDD when no block info (79 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Byte, Byte)],ArrayBuffer((1,2))) (interpreted path) (30 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Boolean, Boolean)],ArrayBuffer((true,false))) (codegen path) (113 milliseconds)
[info] - With WAL enabled: createBlockRDD creates correct WALBackedBlockRDD with all block info having WAL info (124 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Boolean, Boolean)],ArrayBuffer((true,false))) (interpreted path) (34 milliseconds)
[info] - With WAL enabled: createBlockRDD creates BlockRDD when some block info don't have WAL info (82 milliseconds)
[info] - encode/decode for Tuple2: (Seq[Seq[(Int, Int)]],List(List((1,2)))) (codegen path) (57 milliseconds)
[info] WriteAheadLogBackedBlockRDDSuite:
[info] - encode/decode for Tuple2: (Seq[Seq[(Int, Int)]],List(List((1,2)))) (interpreted path) (169 milliseconds)
[info] - encode/decode for tuple with 2 flat encoders: (1,10) (codegen path) (20 milliseconds)
[info] - encode/decode for tuple with 2 flat encoders: (1,10) (interpreted path) (4 milliseconds)
[info] - encode/decode for tuple with 2 product encoders: (PrimitiveData(1,1,1.0,1.0,1,1,true),(3,30)) (codegen path) (57 milliseconds)
[info] - encode/decode for tuple with 2 product encoders: (PrimitiveData(1,1,1.0,1.0,1,1,true),(3,30)) (interpreted path) (16 milliseconds)
[info] - encode/decode for tuple with flat encoder and product encoder: (PrimitiveData(1,1,1.0,1.0,1,1,true),3) (codegen path) (36 milliseconds)
[info] - encode/decode for tuple with flat encoder and product encoder: (PrimitiveData(1,1,1.0,1.0,1,1,true),3) (interpreted path) (14 milliseconds)
[info] - Read data available in both block manager and write ahead log (125 milliseconds)
[info] - encode/decode for tuple with product encoder and flat encoder: (3,PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (39 milliseconds)
[info] - encode/decode for tuple with product encoder and flat encoder: (3,PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (12 milliseconds)
[info] - encode/decode for nested tuple encoder: (1,(10,100)) (codegen path) (51 milliseconds)
[info] - encode/decode for nested tuple encoder: (1,(10,100)) (interpreted path) (8 milliseconds)
[info] - Read data available only in block manager, not in write ahead log (107 milliseconds)
[info] - encode/decode for primitive value class: PrimitiveValueClass(42) (codegen path) (17 milliseconds)
[info] - encode/decode for primitive value class: PrimitiveValueClass(42) (interpreted path) (5 milliseconds)
[info] - encode/decode for reference value class: ReferenceValueClass(Container(1)) (codegen path) (66 milliseconds)
[info] - encode/decode for reference value class: ReferenceValueClass(Container(1)) (interpreted path) (85 milliseconds)
[info] - encode/decode for option of int: Some(31) (codegen path) (21 milliseconds)
[info] - encode/decode for option of int: Some(31) (interpreted path) (5 milliseconds)
[info] - encode/decode for empty option of int: None (codegen path) (5 milliseconds)
[info] - encode/decode for empty option of int: None (interpreted path) (4 milliseconds)
[info] - encode/decode for option of string: Some(abc) (codegen path) (21 milliseconds)
[info] - encode/decode for option of string: Some(abc) (interpreted path) (4 milliseconds)
[info] - Read data available only in write ahead log, not in block manager (224 milliseconds)
[info] - encode/decode for empty option of string: None (codegen path) (6 milliseconds)
[info] - encode/decode for empty option of string: None (interpreted path) (4 milliseconds)
[info] - encode/decode for Tuple2: (UDT,org.apache.spark.sql.catalyst.encoders.ExamplePoint@691) (codegen path) (27 milliseconds)
[info] - encode/decode for Tuple2: (UDT,org.apache.spark.sql.catalyst.encoders.ExamplePoint@691) (interpreted path) (7 milliseconds)
[info] - Read data with partially available in block manager, and rest in write ahead log (98 milliseconds)
[info] - nullable of encoder schema (codegen path) (66 milliseconds)
[info] - nullable of encoder schema (interpreted path) (104 milliseconds)
[info] - nullable of encoder serializer (codegen path) (12 milliseconds)
[info] - nullable of encoder serializer (interpreted path) (11 milliseconds)
[info] - null check for map key: String (codegen path) (31 milliseconds)
[info] - null check for map key: String (interpreted path) (32 milliseconds)
[info] - Test isBlockValid skips block fetching from BlockManager (226 milliseconds)
[info] - null check for map key: Integer (codegen path) (35 milliseconds)
[info] - null check for map key: Integer (interpreted path) (23 milliseconds)
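The "codegen path" vs "interpreted path" labels above refer to the two ways an ExpressionEncoder materializes its serializer and deserializer. A round-trip sketch using the Spark 2.x internal API (case class name hypothetical):

    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

    case class Point(x: Int, y: Int)

    // Serialize an object to an InternalRow and decode it back;
    // resolveAndBind binds the deserializer to the encoder's own schema.
    val enc  = ExpressionEncoder[Point]()
    val row  = enc.toRow(Point(1, 2))
    val back = enc.resolveAndBind().fromRow(row)   // Point(1,2)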
[info] UnsafeRowConverterSuite:
[info] - basic conversion with only primitive types with CODEGEN_ONLY (30 milliseconds)
[info] - basic conversion with only primitive types with NO_CODEGEN (0 milliseconds)
[info] - basic conversion with primitive, string and binary types with CODEGEN_ONLY (11 milliseconds)
[info] - basic conversion with primitive, string and binary types with NO_CODEGEN (1 millisecond)
[info] - basic conversion with primitive, string, date and timestamp types with CODEGEN_ONLY (10 milliseconds)
[info] - basic conversion with primitive, string, date and timestamp types with NO_CODEGEN (1 millisecond)
[info] - null handling with CODEGEN_ONLY (13 milliseconds)
[info] - null handling with NO_CODEGEN (3 milliseconds)
[info] - NaN canonicalization with CODEGEN_ONLY (75 milliseconds)
[info] - NaN canonicalization with NO_CODEGEN (1 millisecond)
[info] - basic conversion with struct type with CODEGEN_ONLY (12 milliseconds)
[info] - basic conversion with struct type with NO_CODEGEN (2 milliseconds)
[info] - basic conversion with array type with CODEGEN_ONLY (9 milliseconds)
[info] - basic conversion with array type with NO_CODEGEN (1 millisecond)
[info] - basic conversion with map type with CODEGEN_ONLY (11 milliseconds)
[info] - basic conversion with map type with NO_CODEGEN (1 millisecond)
[info] - basic conversion with struct and array with CODEGEN_ONLY (13 milliseconds)
[info] - basic conversion with struct and array with NO_CODEGEN (1 millisecond)
[info] - Test whether RDD is valid after removing blocks from block manager (240 milliseconds)
[info] - basic conversion with struct and map with CODEGEN_ONLY (15 milliseconds)
[info] - basic conversion with struct and map with NO_CODEGEN (0 milliseconds)
[info] - basic conversion with array and map with CODEGEN_ONLY (12 milliseconds)
[info] - basic conversion with array and map with NO_CODEGEN (1 millisecond)
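UnsafeRowConverterSuite exercises the projection that turns a generic InternalRow into the compact UnsafeRow binary format; CODEGEN_ONLY and NO_CODEGEN select the generated and interpreted projections respectively. An internal-API sketch (Spark 2.x):

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
    import org.apache.spark.sql.types.{DataType, IntegerType, LongType}

    // Build a projection for an (int, bigint) schema and convert one row.
    val proj   = UnsafeProjection.create(Array[DataType](IntegerType, LongType))
    val unsafe = proj(InternalRow(1, 2L))   // fixed-layout UnsafeRow bytes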
[info] FilterPushdownSuite:
[info] - eliminate subqueries (5 milliseconds)
[info] - simple push down (7 milliseconds)
[info] - combine redundant filters (7 milliseconds)
[info] - do not combine non-deterministic filters even if they are identical (4 milliseconds)
[info] - SPARK-16164: Filter pushdown should keep the ordering in the logical plan (5 milliseconds)
[info] - SPARK-16994: filter should not be pushed through limit (3 milliseconds)
[info] - can't push without rewrite (7 milliseconds)
[info] - nondeterministic: can always push down filter through project with deterministic field (13 milliseconds)
[info] - nondeterministic: can't push down filter through project with nondeterministic field (3 milliseconds)
[info] - nondeterministic: can't push down filter through aggregate with nondeterministic field (8 milliseconds)
[info] - nondeterministic: push down part of filter through aggregate with deterministic field (24 milliseconds)
[info] - filters: combines filters (7 milliseconds)
[info] - joins: push to either side (13 milliseconds)
[info] - Test storing of blocks recovered from write ahead log back into block manager (175 milliseconds)
[info] - joins: push to one side (7 milliseconds)
[info] - joins: do not push down non-deterministic filters into join condition (4 milliseconds)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@45c4283b rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.Promise$class.failure(Promise.scala:104)
	at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
	at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
(the same RejectedExecutionException and stack trace repeated four more times, for CallbackRunnable instances @557a9e94, @2fbd621c, @37048772 and @20a4269f; identical frames omitted)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3dfcec02 rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@2545096c rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@5f82f5e4 rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@310c9f8e rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@2a5c4def rejected from java.util.concurrent.ThreadPoolExecutor@c6bee7[Shutting down, pool size = 5, active threads = 5, queued tasks = 0, completed tasks = 3]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
	at scala.concurrent.Promise$class.complete(Promise.scala:55)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
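The RejectedExecutionException traces above are one failure repeated: Future callbacks were handed to a ThreadPoolExecutor that had already begun shutting down, and its default AbortPolicy refuses new work. A minimal sketch of the mechanism outside Spark (object name and pool size are illustrative):

    import java.util.concurrent.{Executors, RejectedExecutionException}

    object RejectedAfterShutdownSketch {
      def main(args: Array[String]): Unit = {
        val pool = Executors.newFixedThreadPool(5) // default handler is AbortPolicy
        pool.shutdown()                            // pool enters the "Shutting down" state seen above
        try {
          // Any task submitted after shutdown() is refused, as in the log:
          pool.execute(new Runnable { def run(): Unit = println("never runs") })
        } catch {
          case e: RejectedExecutionException => println(s"rejected: ${e.getMessage}")
        }
      }
    }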
[info] - joins: push to one side after transformCondition (11 milliseconds)
[info] - joins: rewrite filter to push to either side (63 milliseconds)
[info] - joins: push down left semi join (58 milliseconds)
[info] - joins: push down left outer join #1 (9 milliseconds)
[info] - joins: push down right outer join #1 (10 milliseconds)
[info] - joins: push down left outer join #2 (10 milliseconds)
[info] - joins: push down right outer join #2 (10 milliseconds)
[info] - joins: push down left outer join #3 (9 milliseconds)
[info] - joins: push down right outer join #3 (9 milliseconds)
[info] - joins: push down left outer join #4 (10 milliseconds)
[info] - joins: push down right outer join #4 (13 milliseconds)
[info] - joins: push down left outer join #5 (11 milliseconds)
[info] - joins: push down right outer join #5 (12 milliseconds)
[info] - joins: can't push down (5 milliseconds)
[info] - joins: conjunctive predicates (8 milliseconds)
[info] - joins: conjunctive predicates #2 (7 milliseconds)
[info] - joins: conjunctive predicates #3 (13 milliseconds)
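The "joins: push down ..." cases above come from Catalyst's predicate-pushdown tests. A hedged sketch of the transformation they exercise (plan shapes are illustrative, not the suite's fixtures):

    // Filter pushdown through an inner join, conceptually:
    //   SELECT * FROM l JOIN r ON l.k = r.k WHERE l.a > 1 AND r.b < 2
    // is rewritten so each single-side predicate runs below the join,
    // shrinking both join inputs before they are matched:
    //   Join(l.k = r.k)
    //   +- Filter(a > 1)   // pushed to the left side
    //   |  +- Scan l
    //   +- Filter(b < 2)   // pushed to the right side
    //      +- Scan r
    // For outer joins, pushdown is constrained by which side preserves rows,
    // which is what the left/right outer join and "can't push down" cases probe.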
[info] - read data in block manager and WAL with encryption on (283 milliseconds)
[info] - joins: push down where clause into left anti join (7 milliseconds)
[info] - joins: only push down join conditions to the right of a left anti join (7 milliseconds)
[info] TimeSuite:
[info] - joins: only push down join conditions to the right of an existence join (7 milliseconds)
[info] - less (0 milliseconds)
[info] - lessEq (0 milliseconds)
[info] - greater (1 millisecond)
[info] - greaterEq (1 millisecond)
[info] - plus (1 millisecond)
[info] - minus Time (0 milliseconds)
[info] - minus Duration (0 milliseconds)
[info] - floor (0 milliseconds)
[info] - isMultipleOf (0 milliseconds)
[info] - min (0 milliseconds)
[info] - max (0 milliseconds)
[info] - until (1 millisecond)
[info] - to (0 milliseconds)
[info] - generate: predicate referenced no generated column (16 milliseconds)
[info] DStreamScopeSuite:
[info] - generate: non-deterministic predicate referenced no generated column (14 milliseconds)
[info] - generate: part of conjuncts referenced generated column (10 milliseconds)
[info] - generate: all conjuncts referenced generated column (4 milliseconds)
[info] - aggregate: push down filter when filter on group by expression (10 milliseconds)
[info] - aggregate: don't push down filter when filter not on group by expression (9 milliseconds)
[info] - dstream without scope (2 milliseconds)
[info] - aggregate: push down filters partially which are subset of group by expressions (11 milliseconds)
[info] - input dstream without scope (3 milliseconds)
[info] - scoping simple operations (10 milliseconds)
[info] - aggregate: push down filters with alias (13 milliseconds)
[info] - aggregate: push down filters with literal (11 milliseconds)
[info] - scoping nested operations (23 milliseconds)
[info] - transform should allow RDD operations to be captured in scopes (15 milliseconds)
[info] - aggregate: don't push down filters that are nondeterministic (28 milliseconds)
[info] - SPARK-17712: aggregate: don't push down filters that are data-independent (8 milliseconds)
[info] - aggregate: don't push filters if the aggregate has no grouping expressions (24 milliseconds)
[info] - broadcast hint (17 milliseconds)
[info] - union (24 milliseconds)
[info] - expand (25 milliseconds)
[info] - predicate subquery: push down simple (22 milliseconds)
[info] - predicate subquery: push down complex (21 milliseconds)
[info] - SPARK-20094: don't push predicate with IN subquery into join condition (20 milliseconds)
[info] - foreachRDD should allow RDD operations to be captured in scope (231 milliseconds)
[info] - Window: predicate push down -- basic (29 milliseconds)
[info] - Window: predicate push down -- predicates with compound predicate using only one column (15 milliseconds)
[info] - Window: predicate push down -- multi window expressions with the same window spec (18 milliseconds)
[info] - Window: predicate push down -- multi window specification - 1 (35 milliseconds)
[info] - Window: predicate push down -- multi window specification - 2 (281 milliseconds)
[info] - Window: predicate push down -- predicates with multiple partitioning columns (14 milliseconds)
[info] - Window: predicate push down -- complex predicate with the same expressions !!! IGNORED !!!
[info] - Window: no predicate push down -- predicates are not from partitioning keys (15 milliseconds)
[info] - Window: no predicate push down -- partial compound partition key (13 milliseconds)
[info] - Window: no predicate push down -- complex predicates containing non partitioning columns (12 milliseconds)
[info] - Window: no predicate push down -- complex predicate with different expressions (14 milliseconds)
[info] - join condition pushdown: deterministic and non-deterministic (12 milliseconds)
[info] - watermark pushdown: no pushdown on watermark attribute #1 (8 milliseconds)
[info] - watermark pushdown: no pushdown for nondeterministic filter (7 milliseconds)
[info] - watermark pushdown: full pushdown (4 milliseconds)
[info] - watermark pushdown: no pushdown on watermark attribute #2 (3 milliseconds)
[info] UnsafeArraySuite:
[info] - read array (236 milliseconds)
[info] - from primitive array (1 millisecond)
[info] ReceiverSuite:
[info] - to primitive array (263 milliseconds)
[info] - unsafe java serialization (2 milliseconds)
[info] - unsafe Kryo serialization (28 milliseconds)
[info] DecimalAggregatesSuite:
[info] - Decimal Sum Aggregation: Optimized (10 milliseconds)
[info] - Decimal Sum Aggregation: Not Optimized (4 milliseconds)
[info] - Decimal Average Aggregation: Optimized (5 milliseconds)
[info] - Decimal Average Aggregation: Not Optimized (3 milliseconds)
[info] - Decimal Sum Aggregation over Window: Optimized (12 milliseconds)
[info] - Decimal Sum Aggregation over Window: Not Optimized (16 milliseconds)
[info] - Decimal Average Aggregation over Window: Optimized (16 milliseconds)
[info] - Decimal Average Aggregation over Window: Not Optimized (10 milliseconds)
[info] DecimalExpressionSuite:
[info] - receiver life cycle (340 milliseconds)
[info] - block generator throttling !!! IGNORED !!!
[info] - UnscaledValue (86 milliseconds)
[info] - MakeDecimal (66 milliseconds)
[info] - PromotePrecision (75 milliseconds)
[info] - CheckOverflow (263 milliseconds)
[info] TableIdentifierParserSuite:
[info] - table identifier (31 milliseconds)
[info] - quoted identifiers (8 milliseconds)
[info] - table identifier - strict keywords (57 milliseconds)
[info] - table identifier - non reserved keywords (177 milliseconds)
[info] - SPARK-17364 table identifier - contains number (2 milliseconds)
[info] - SPARK-17832 table identifier - contains backtick (3 milliseconds)
[info] GeneratedProjectionSuite:
[info] - repartitioned RDDs perform load balancing (18 seconds, 167 milliseconds)
[info] - coalesced RDDs (219 milliseconds)
[info] - coalesced RDDs with locality (53 milliseconds)
[info] - coalesced RDDs with partial locality (40 milliseconds)
[info] - generated projections on wider table (2 seconds, 878 milliseconds)
[info] - coalesced RDDs with locality, large scale (10K partitions) (1 second, 623 milliseconds)
[info] - coalesced RDDs with partial locality, large scale (10K partitions) (595 milliseconds)
[info] - coalesced RDDs with locality, fail first pass (13 milliseconds)
[info] - zipped RDDs (56 milliseconds)
[info] - partition pruning (20 milliseconds)
[info] - write ahead log - generating and cleaning (8 seconds, 931 milliseconds)
[info] StateMapSuite:
[info] - EmptyStateMap (1 millisecond)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove (16 milliseconds)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove with copy (2 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing (107 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with compaction (13 milliseconds)
[info] - run Spark in yarn-cluster mode with additional jar (22 seconds, 112 milliseconds)
[info] - collect large number of empty partitions (8 seconds, 894 milliseconds)
[info] - SPARK-18016: generated projections on wider table requiring class-splitting (11 seconds, 953 milliseconds)
[info] - generated unsafe projection with array of binary (17 milliseconds)
[info] - padding bytes should be zeroed out (9 milliseconds)
[info] - MutableProjection should not cache content from the input row (11 milliseconds)
[info] - SafeProjection should not cache content from the input row (6 milliseconds)
[info] - SPARK-22699: GenerateSafeProjection should not use global variables for struct (7 milliseconds)
[info] - take (1 second, 561 milliseconds)
[info] - top with predefined ordering (100 milliseconds)
[info] - top with custom ordering (11 milliseconds)
[info] - takeOrdered with predefined ordering (9 milliseconds)
[info] - OpenHashMapBasedStateMap - all possible sequences of operations with copies (7 seconds, 68 milliseconds)
[info] - takeOrdered with limit 0 (1 millisecond)
[info] - takeOrdered with custom ordering (15 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with KryoSerializable states (15 milliseconds)
[info] - EmptyStateMap - serializing and deserializing (19 milliseconds)
[info] - MapWithStateRDDRecord - serializing and deserializing with KryoSerializable states (13 milliseconds)
[info] UIUtilsSuite:
[info] - shortTimeUnitString (0 milliseconds)
[info] - normalizeDuration (4 milliseconds)
[info] - convertToTimeUnit (0 milliseconds)
[info] - formatBatchTime (1 millisecond)
[info] DurationSuite:
[info] - less (1 millisecond)
[info] - lessEq (0 milliseconds)
[info] - greater (0 milliseconds)
[info] - greaterEq (1 millisecond)
[info] - plus (0 milliseconds)
[info] - minus (1 millisecond)
[info] - times (0 milliseconds)
[info] - div (0 milliseconds)
[info] - isMultipleOf (0 milliseconds)
[info] - min (0 milliseconds)
[info] - max (1 millisecond)
[info] - isZero (0 milliseconds)
[info] - Milliseconds (1 millisecond)
[info] - Seconds (0 milliseconds)
[info] - Minutes (1 millisecond)
[info] MapWithStateRDDSuite:
[info] - isEmpty (74 milliseconds)
[info] - sample preserves partitioner (2 milliseconds)
[info] - creation from pair RDD (466 milliseconds)
[info] - updating state and generating mapped data in MapWithStateRDDRecord (6 milliseconds)
[info] - states generated by MapWithStateRDD (1 second, 264 milliseconds)
[info] - checkpointing (1 second, 92 milliseconds)
[info] - checkpointing empty state RDD (538 milliseconds)
[info] DStreamClosureSuite:
[info] - user provided closures are actually cleaned (52 milliseconds)
[info] UISeleniumSuite:
[info] - attaching and detaching a Streaming tab (2 seconds, 129 milliseconds)
[info] FileBasedWriteAheadLogSuite:
[info] - FileBasedWriteAheadLog - read all logs (40 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (25 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (17 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (17 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (35 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (47 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (11 milliseconds)
[info] - FileBasedWriteAheadLog - seqToParIterator (97 milliseconds)
[info] - FileBasedWriteAheadLogWriter - writing data (14 milliseconds)
[info] - FileBasedWriteAheadLogWriter - syncing of data by writing and reading immediately (15 milliseconds)
[info] - SPARK-18016: generated projections on wider table requiring state compaction (8 seconds, 875 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data (3 milliseconds)
[info] AggregateOptimizeSuite:
[info] - FileBasedWriteAheadLogReader - sequentially reading data written with writer (3 milliseconds)
[info] - remove literals in grouping expression (7 milliseconds)
[info] - do not remove all grouping expressions if they are all literals (5 milliseconds)
[info] - Remove aliased literals (8 milliseconds)
[info] - remove repetition in grouping expression (6 milliseconds)
[info] InternalOutputModesSuite:
[info] - supported strings (2 milliseconds)
[info] - unsupported strings (1 millisecond)
[info] ExpressionTypeCheckingSuite:
[info] - check types for unary arithmetic (3 milliseconds)
[info] - check types for binary arithmetic (29 milliseconds)
[info] - check types for predicates (52 milliseconds)
[info] - check types for aggregates (8 milliseconds)
[info] - check types for others (6 milliseconds)
[info] - check types for CreateNamedStruct (7 milliseconds)
[info] - check types for CreateMap (6 milliseconds)
[info] - check types for ROUND/BROUND (18 milliseconds)
[info] - check types for Greatest/Least (8 milliseconds)
[info] SimplifyConditionalSuite:
[info] - simplify if (7 milliseconds)
[info] - remove unnecessary if when the outputs are semantically equivalent (5 milliseconds)
[info] - remove unreachable branches (8 milliseconds)
[info] - remove entire CaseWhen if only the else branch is reachable (8 milliseconds)
[info] - remove entire CaseWhen if the first branch is always true (11 milliseconds)
[info] - simplify CaseWhen, prune branches following a definite true (3 milliseconds)
[info] - simplify CaseWhen if all the outputs are semantically equivalent (18 milliseconds)
[info] LimitPushdownSuite:
[info] - Union: limit to each side (8 milliseconds)
[info] - Union: limit to each side with constant-foldable limit expressions (6 milliseconds)
[info] - Union: limit to each side with the new limit number (7 milliseconds)
[info] - Union: no limit to both sides if children have smaller limit values (9 milliseconds)
[info] - Union: limit to each side if children have larger limit values (10 milliseconds)
[info] - left outer join (6 milliseconds)
[info] - left outer join and left sides are limited (6 milliseconds)
[info] - left outer join and right sides are limited (6 milliseconds)
[info] - right outer join (6 milliseconds)
[info] - right outer join and right sides are limited (6 milliseconds)
[info] - right outer join and left sides are limited (7 milliseconds)
[info] - larger limits are not pushed on top of smaller ones in right outer join (5 milliseconds)
[info] - full outer join where neither side is limited and both sides have same statistics (13 milliseconds)
[info] - full outer join where neither side is limited and left side has larger statistics (5 milliseconds)
[info] - full outer join where neither side is limited and right side has larger statistics (4 milliseconds)
[info] - full outer join where both sides are limited (5 milliseconds)
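The LimitPushdownSuite names above describe when a limit may be propagated below a Union or an outer join. A hedged sketch of the rewrite (operator names follow Catalyst's conventions):

    // Limit over a union: each side needs at most n rows to produce the final n.
    //   Limit(5, Union(a, b))
    //     => Limit(5, Union(LocalLimit(5, a), LocalLimit(5, b)))
    // Limit over a left outer join: only the row-preserving left side is limited.
    //   Limit(5, Join(LeftOuter, a, b))
    //     => Limit(5, Join(LeftOuter, LocalLimit(5, a), b))
    // The "larger limits are not pushed on top of smaller ones" case guards
    // the opposite direction: an existing smaller limit must win.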
[info] StarJoinCostBasedReorderSuite:
[info] - Test 1: Star query with two dimensions and two regular tables (69 milliseconds)
[info] - Test 2: Star with a linear branch (42 milliseconds)
[info] - FileBasedWriteAheadLogReader - reading data written with writer after corrupted write (540 milliseconds)
[info] - FileBasedWriteAheadLogReader - handles errors when file doesn't exist (4 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader (13 milliseconds)
[info] - Test 3: Star with derived branches (61 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader written with writer (8 milliseconds)
[info] FileBasedWriteAheadLogWithFileCloseAfterWriteSuite:
[info] - FileBasedWriteAheadLog - read all logs (41 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (43 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (64 milliseconds)
[info] - Test 4: Star with several branches (211 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (38 milliseconds)
[info] - Test 5: RI star only (19 milliseconds)
[info] - Test 6: No RI star (16 milliseconds)
[info] CodeGeneratorWithInterpretedFallbackSuite:
[info] - FileBasedWriteAheadLog - clean old logs synchronously (31 milliseconds)
[info] - UnsafeProjection with codegen factory mode (7 milliseconds)
[info] - fallback to the interpreter mode (11 milliseconds)
[info] - codegen failures in the CODEGEN_ONLY mode (4 milliseconds)
[info] ComplexTypeSuite:
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (131 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (6 milliseconds)
[info] - FileBasedWriteAheadLog - close after write flag (3 milliseconds)
[info] BatchedWriteAheadLogSuite:
[info] - BatchedWriteAheadLog - read all logs (26 milliseconds)
[info] - BatchedWriteAheadLog - write logs (19 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (21 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (29 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (17 milliseconds)
[info] - GetArrayItem (268 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (62 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (3 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (8 milliseconds)
[info] - BatchedWriteAheadLog - serializing and deserializing batched records (2 milliseconds)
[info] - BatchedWriteAheadLog - failures in wrappedLog get bubbled up (25 milliseconds)
[info] - BatchedWriteAheadLog - name log with the highest timestamp of aggregated entries (18 milliseconds)
[info] - BatchedWriteAheadLog - shutdown properly (2 milliseconds)
[info] - BatchedWriteAheadLog - fail everything in queue during shutdown (5 milliseconds)
[info] BatchedWriteAheadLogWithCloseFileAfterWriteSuite:
[info] - BatchedWriteAheadLog - read all logs (27 milliseconds)
[info] - GetMapValue (190 milliseconds)
[info] - BatchedWriteAheadLog - write logs (43 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (49 milliseconds)
[info] - GetStructField (109 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (37 milliseconds)
[info] - GetArrayStructFields (62 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (39 milliseconds)
[info] - run Spark in yarn-cluster mode unsuccessfully (15 seconds, 27 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (192 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (7 milliseconds)
[info] - BatchedWriteAheadLog - close after write flag (73 milliseconds)
[info] CheckpointSuite:
[info] - non-existent checkpoint dir (3 milliseconds)
[info] - CreateArray (486 milliseconds)
[info] - CreateMap (377 milliseconds)
[info] - MapFromArrays (244 milliseconds)
[info] - CreateStruct (168 milliseconds)
[info] - CreateNamedStruct (167 milliseconds)
[info] - test dsl for complex type (87 milliseconds)
[info] - error message of ExtractValue (1 millisecond)
[info] - ensure to preserve metadata (1 millisecond)
[info] - StringToMap (101 milliseconds)
[info] - SPARK-22693: CreateNamedStruct should not use global variables (1 millisecond)
[info] JoinReorderSuite:
[info] - reorder 3 tables (23 milliseconds)
[info] - put unjoinable item at the end and reorder 3 joinable tables (10 milliseconds)
[info] - reorder 3 tables with pure-attribute project (13 milliseconds)
[info] - reorder 3 tables - one of the leaf items is a project (14 milliseconds)
[info] - don't reorder if project contains non-attribute (12 milliseconds)
[info] - reorder 4 tables (bushy tree) (11 milliseconds)
[info] - keep the order of attributes in the final output (36 milliseconds)
[info] - SPARK-26352: join reordering should not change the order of attributes (9 milliseconds)
[info] - reorder recursively (12 milliseconds)
[info] CanonicalizeSuite:
[info] - SPARK-24276: IN expression with different order are semantically equal (5 milliseconds)
[info] - SPARK-26402: accessing nested fields with different cases in case insensitive mode (2 milliseconds)
[info] MetadataSuite:
[info] - metadata builder and getters (1 millisecond)
[info] - metadata json conversion (241 milliseconds)
[info] ComputeCurrentTimeSuite:
[info] - analyzer should replace current_timestamp with literals (4 milliseconds)
[info] - analyzer should replace current_date with literals (3 milliseconds)
[info] BitwiseExpressionsSuite:
[info] - BitwiseNOT (731 milliseconds)
[info] - BitwiseAnd (751 milliseconds)
[info] - BitwiseOr (1 second, 616 milliseconds)
[info] - BitwiseXor (444 milliseconds)
[info] RowEncoderSuite:
[info] - encode/decode: struct<null:null,boolean:boolean,byte:tinyint,short:smallint,int:int,long:bigint,float:float,double:double,decimal:decimal(38,18),string:string,binary:binary,date:date,timestamp:timestamp,udt:examplepoint> (codegen path) (145 milliseconds)
[info] - encode/decode: struct<null:null,boolean:boolean,byte:tinyint,short:smallint,int:int,long:bigint,float:float,double:double,decimal:decimal(38,18),string:string,binary:binary,date:date,timestamp:timestamp,udt:examplepoint> (interpreted path) (19 milliseconds)
[info] - encode/decode: struct<arrayOfNull:array<null>,arrayOfString:array<string>,arrayOfArrayOfString:array<array<string>>,arrayOfArrayOfInt:array<array<int>>,arrayOfMap:array<map<string,string>>,arrayOfStruct:array<struct<str:string>>,arrayOfUDT:array<examplepoint>> (codegen path) (1 second, 852 milliseconds)
[info] - takeSample (18 seconds, 786 milliseconds)
[info] - takeSample from an empty rdd (6 milliseconds)
[info] - randomSplit (234 milliseconds)
[info] - runJob on an invalid partition (3 milliseconds)
[info] - sort an empty RDD (23 milliseconds)
[info] - encode/decode: struct<arrayOfNull:array<null>,arrayOfString:array<string>,arrayOfArrayOfString:array<array<string>>,arrayOfArrayOfInt:array<array<int>>,arrayOfMap:array<map<string,string>>,arrayOfStruct:array<struct<str:string>>,arrayOfUDT:array<examplepoint>> (interpreted path) (1 second, 797 milliseconds)
[info] - sortByKey (157 milliseconds)
[info] - basic rdd checkpoints + dstream graph checkpoint recovery (9 seconds, 156 milliseconds)
[info] - sortByKey ascending parameter (99 milliseconds)
[info] - sortByKey with explicit ordering (67 milliseconds)
[info] - repartitionAndSortWithinPartitions (49 milliseconds)
[info] - recovery of conf through checkpoints (207 milliseconds)
[info] - cartesian on empty RDD (10 milliseconds)
[info] - cartesian on non-empty RDDs (37 milliseconds)
[info] - intersection (60 milliseconds)
[info] - intersection strips duplicates in an input (55 milliseconds)
[info] - zipWithIndex (24 milliseconds)
[info] - zipWithIndex with a single partition (8 milliseconds)
[info] - zipWithIndex chained with other RDDs (SPARK-4433) (26 milliseconds)
[info] - get correct spark.driver.[host|port] from checkpoint (212 milliseconds)
[info] - zipWithUniqueId (40 milliseconds)
[info] - retag with implicit ClassTag (19 milliseconds)
[info] - parent method (3 milliseconds)
[info] - getNarrowAncestors (18 milliseconds)
[info] - getNarrowAncestors with multiple parents (15 milliseconds)
[info] - getNarrowAncestors with cycles (14 milliseconds)
[info] - task serialization exception should not hang scheduler (23 milliseconds)
[info] - RDD.partitions() fails fast when partition indices are incorrect (SPARK-13021) (2 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

[info] - nested RDDs are not supported (SPARK-5063) (19 milliseconds)
[info] - actions cannot be performed inside of transformations (SPARK-5063) (18 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

[info] - custom RDD coalescer (264 milliseconds)
[info] - SPARK-18406: race between end-of-task and completion iterator read lock release (17 milliseconds)
[info] - SPARK-23496: order of input partitions can result in severe skew in coalesce (5 milliseconds)
-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with map and reduceByKey operations (500 milliseconds)
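The dashed blocks interleaved through this portion of the log (a "Time: <batch> ms" header followed by records such as (a,2)) are the standard per-batch output of DStream.print(); the streaming suites run concurrently with the SQL suites, so the two outputs interleave. A minimal sketch of code that produces this shape (master, host, and port are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Milliseconds, StreamingContext}

    object PrintFormatSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("print-format-sketch")
        val ssc = new StreamingContext(conf, Milliseconds(500)) // 500 ms batches, as above
        ssc.socketTextStream("localhost", 9999)                 // placeholder source
          .flatMap(_.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
          .print() // writes the dashed "Time: ... ms" header and up to 10 elements per batch
        ssc.start()
        ssc.awaitTermination()
      }
    }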
[info] - cannot run actions after SparkContext has been stopped (SPARK-5063) (179 milliseconds)
[info] - cannot call methods on a stopped SparkContext (SPARK-5063) (1 millisecond)
[info] BasicSchedulerIntegrationSuite:
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

[info] - super simple job (107 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

[info] - multi-stage job (304 milliseconds)
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

[info] - job with fetch failure (326 milliseconds)
[info] - job failure after 4 attempts (96 milliseconds)
[info] OutputCommitCoordinatorSuite:
-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,4)

[info] - Only one of two duplicate commit tasks should commit (81 milliseconds)
-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,4)

[info] - If commit fails and the task is retried, it should not be locked and will succeed. (52 milliseconds)
-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,4)

[info] - recovery with invertible reduceByKeyAndWindow operation (1 second, 471 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

[info] - encode/decode: struct<mapOfIntAndString:map<int,string>,mapOfStringAndArray:map<string,array<string>>,mapOfArrayAndInt:map<array<string>,int>,mapOfArray:map<array<string>,array<string>>,mapOfStringAndStruct:map<string,struct<str:string>>,mapOfStructAndString:map<struct<str:string>,string>,mapOfStruct:map<struct<str:string>,struct<str:string>>> (codegen path) (4 seconds, 142 milliseconds)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFiles operation (1 second, 931 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsNewAPIHadoopFiles operation (1 second, 378 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(b,1)
(a,2)

-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 1500 ms
-------------------------------------------

-------------------------------------------
Time: 2000 ms
-------------------------------------------
(b,1)
(a,2)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)

-------------------------------------------
Time: 3000 ms
-------------------------------------------

[info] - recovery with saveAsHadoopFile inside transform operation (1 second, 526 milliseconds)
[info] - Job should not complete if all commits are denied (5 seconds, 16 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)

[info] - Only authorized committer failures can clear the authorized committer lock (SPARK-6614) (8 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)

-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)

[info] - SPARK-19631: Do not allow failed attempts to be authorized for committing (5 milliseconds)
[info] - SPARK-24589: Differentiate tasks from different stage attempts (5 milliseconds)
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)

-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,5)

-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,6)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)

-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,8)

[info] - encode/decode: struct<mapOfIntAndString:map<int,string>,mapOfStringAndArray:map<string,array<string>>,mapOfArrayAndInt:map<array<string>,int>,mapOfArray:map<array<string>,array<string>>,mapOfStringAndStruct:map<string,struct<str:string>>,mapOfStructAndString:map<struct<str:string>,string>,mapOfStruct:map<struct<str:string>,struct<str:string>>> (interpreted path) (4 seconds, 290 milliseconds)
-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,9)

-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,10)

[info] - recovery with updateStateByKey operation (1 second, 236 milliseconds)
[info] - encode/decode: struct<structOfString:struct<str:string>,structOfStructOfString:struct<struct:struct<str:string>>,structOfArray:struct<array:array<string>>,structOfMap:struct<map:map<string,string>>,structOfArrayAndMap:struct<array:array<string>,map:map<string,string>>,structOfUDT:struct<udt:examplepoint>> (codegen path) (217 milliseconds)
[info] - encode/decode: struct<structOfString:struct<str:string>,structOfStructOfString:struct<struct:struct<str:string>>,structOfArray:struct<array:array<string>>,structOfMap:struct<map:map<string,string>>,structOfArrayAndMap:struct<array:array<string>,map:map<string,string>>,structOfUDT:struct<udt:examplepoint>> (interpreted path) (192 milliseconds)
[info] - SPARK-24589: Make sure stage state is cleaned up (1 second, 3 milliseconds)
[info] - encode/decode decimal type (codegen path) (34 milliseconds)
[info] - encode/decode decimal type (interpreted path) (24 milliseconds)
[info] TaskMetricsSuite:
[info] - RowEncoder should preserve decimal precision and scale (codegen path) (12 milliseconds)
[info] - RowEncoder should preserve decimal precision and scale (interpreted path) (4 milliseconds)
[info] - mutating values (0 milliseconds)
[info] - mutating shuffle read metrics values (0 milliseconds)
[info] - mutating shuffle write metrics values (0 milliseconds)
[info] - RowEncoder should preserve schema nullability (codegen path) (2 milliseconds)
[info] - mutating input metrics values (1 millisecond)
[info] - mutating output metrics values (0 milliseconds)
[info] - merging multiple shuffle read metrics (0 milliseconds)
[info] - additional accumulables (2 milliseconds)
[info] - RowEncoder should preserve schema nullability (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve nested column name (codegen path) (4 milliseconds)
[info] - RowEncoder should preserve nested column name (interpreted path) (4 milliseconds)
[info] OutputCommitCoordinatorIntegrationSuite:
[info] - RowEncoder should support primitive arrays (codegen path) (74 milliseconds)
[info] - RowEncoder should support primitive arrays (interpreted path) (43 milliseconds)
[info] - RowEncoder should support array as the external type for ArrayType (codegen path) (58 milliseconds)
[info] - RowEncoder should support array as the external type for ArrayType (interpreted path) (48 milliseconds)
[info] - RowEncoder should throw RuntimeException if input row object is null (codegen path) (10 milliseconds)
[info] - RowEncoder should throw RuntimeException if input row object is null (interpreted path) (1 millisecond)
[info] - exception thrown in OutputCommitter.commitTask() (185 milliseconds)
[info] UIUtilsSuite:
[info] - RowEncoder should validate external type (codegen path) (61 milliseconds)
[info] - RowEncoder should validate external type (interpreted path) (17 milliseconds)
[info] - makeDescription(plainText = false) (29 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = true), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = true), nullable = true (interpreted path) (14 milliseconds)
[info] - makeDescription(plainText = true) (14 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = true), nullable = false (codegen path) (2 milliseconds)
[info] - SPARK-11906: Progress bar should not overflow because of speculative tasks (3 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = true), nullable = false (interpreted path) (2 milliseconds)
[info] - decodeURLParameter (SPARK-12708: Sorting task error in Stages Page in yarn mode.) (1 millisecond)
[info] - SPARK-20393: Prevent newline characters in parameters. (1 millisecond)
[info] - SPARK-20393: Prevent script from parameters running on page. (0 milliseconds)
[info] - SPARK-20393: Prevent javascript from parameters running on page. (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = false), nullable = true (codegen path) (2 milliseconds)
[info] - SPARK-20393: Prevent links from parameters on page. (0 milliseconds)
[info] - SPARK-20393: Prevent popups from parameters on page. (0 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = false), nullable = true (interpreted path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = false), nullable = false (codegen path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(IntegerType, containsNull = false), nullable = false (interpreted path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = true), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = true), nullable = true (interpreted path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = true), nullable = false (codegen path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = true), nullable = false (interpreted path) (1 millisecond)
[info] SumEvaluatorSuite:
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = false), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = false), nullable = true (interpreted path) (1 millisecond)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = false), nullable = false (codegen path) (9 milliseconds)
[info] - correct handling of count 1 (7 milliseconds)
[info] - correct handling of count 0 (0 milliseconds)
[info] - RowEncoder should preserve array nullability: ArrayType(StringType, containsNull = false), nullable = false (interpreted path) (2 milliseconds)
[info] - correct handling of NaN (0 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = true), nullable = true (codegen path) (3 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = true), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = true), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = true), nullable = false (interpreted path) (2 milliseconds)
[info] - correct handling of > 1 values (8 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = false), nullable = true (codegen path) (2 milliseconds)
[info] - test count > 1 (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = false), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = false), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, IntegerType, valueContainsNull = false), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = true), nullable = true (codegen path) (2 milliseconds)
[info] ApplicationCacheSuite:
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = true), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = true), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = true), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = false), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = false), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = false), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(IntegerType, StringType, valueContainsNull = false), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = true), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = true), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = true), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = true), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = false), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = false), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = false), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, IntegerType, valueContainsNull = false), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = true), nullable = true (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = true), nullable = true (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = true), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = true), nullable = false (interpreted path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = false), nullable = true (codegen path) (23 milliseconds)
[info] - Completed UI get (59 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = false), nullable = true (interpreted path) (4 milliseconds)
[info] - Test that if an attempt ID is set, it must be used in lookups (4 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = false), nullable = false (codegen path) (2 milliseconds)
[info] - RowEncoder should preserve map nullability: MapType(StringType, StringType, valueContainsNull = false), nullable = false (interpreted path) (2 milliseconds)
[info] - Incomplete apps refreshed (12 milliseconds)
[info] BooleanSimplificationSuite:
[info] - a && a => a (26 milliseconds)
[info] - a || a => a (6 milliseconds)
[info] - (a && b && c && ...) || (a && b && d && ...) || (a && b && e && ...) ... (16 milliseconds)
[info] - (a || b || c || ...) && (a || b || d || ...) && (a || b || e || ...) ... (23 milliseconds)
[info] - e && (!e || f) - not nullable (13 milliseconds)
[info] - e && (!e || f) - nullable (21 milliseconds)
[info] - a < 1 && (!(a < 1) || f) - not nullable (25 milliseconds)
[info] - a < 1 && ((a >= 1) || f) - not nullable (45 milliseconds)
[info] - DeMorgan's law (14 milliseconds)
[info] - (a && b) || (a && c) => a && (b || c) when case insensitive (5 milliseconds)
[info] - (a || b) && (a || c) => a || (b && c) when case insensitive (4 milliseconds)
[info] - Complementation Laws (14 milliseconds)
[info] - Complementation Laws - null handling (11 milliseconds)
[info] - Complementation Laws - negative case (9 milliseconds)
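The BooleanSimplificationSuite names above are standard boolean-algebra identities applied during optimization. Spelled out to match the test names (illustrative, not the rule's source):

    // a && a                => a                  (idempotence)
    // (a && b) || (a && c)  => a && (b || c)      (factor out the common conjunct)
    // !(a && b)             => !a || !b           (DeMorgan's law)
    // a && !a => false,  a || !a => true          (complementation; under SQL's
    //   three-valued logic this only holds when a is not nullable, hence the
    //   separate "null handling" and "negative case" entries)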
[info] - Large Scale Application Eviction (319 milliseconds)
[info] - Attempts are Evicted (16 milliseconds)
[info] - redirect includes query params (29 milliseconds)
[info] RpcAddressSuite:
[info] - hostPort (0 milliseconds)
[info] - fromSparkURL (1 millisecond)
[info] - fromSparkURL: a typo url (1 millisecond)
[info] - fromSparkURL: invalid scheme (1 millisecond)
[info] - toSparkURL (0 milliseconds)
[info] HistoryServerSuite:
[info] - filter reduction - positive cases (505 milliseconds)
[info] DSLHintSuite:
[info] - various hint parameters (3 milliseconds)
[info] LikeSimplificationSuite:
[info] - simplify Like into StartsWith (6 milliseconds)
[info] - simplify Like into EndsWith (2 milliseconds)
[info] - simplify Like into StartsWith and EndsWith (4 milliseconds)
[info] - simplify Like into Contains (3 milliseconds)
[info] - simplify Like into EqualTo (3 milliseconds)
[info] - null pattern (1 millisecond)
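The LikeSimplificationSuite names above correspond to rewriting LIKE patterns with simple wildcard shapes into cheaper string predicates, roughly as follows (illustrative pairs, not the rule's source):

    // col LIKE 'abc%'   => StartsWith(col, "abc")
    // col LIKE '%xyz'   => EndsWith(col, "xyz")
    // col LIKE 'a%c'    => Length(col) >= 2 && StartsWith(col, "a") && EndsWith(col, "c")
    // col LIKE '%mid%'  => Contains(col, "mid")
    // col LIKE 'abc'    => col = 'abc'
    // A null pattern is left alone so that null propagation stays intact.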
[info] EliminateSortsSuite:
[info] - Empty order by clause (3 milliseconds)
[info] - All the SortOrder are no-op (2 milliseconds)
[info] - Partial order-by clauses contain no-op SortOrder (4 milliseconds)
[info] - Remove no-op alias (8 milliseconds)
[info] DecimalPrecisionSuite:
[info] - basic operations (57 milliseconds)
[info] - Comparison operations (17 milliseconds)
[info] - decimal precision for union (30 milliseconds)
[info] - bringing in primitive types (17 milliseconds)
[info] - maximum decimals (55 milliseconds)
[info] - DecimalType.isWiderThan (0 milliseconds)
[info] - strength reduction for integer/decimal comparisons - basic test (31 milliseconds)
[info] - strength reduction for integer/decimal comparisons - overflow test (47 milliseconds)
[info] - SPARK-24468: operations on decimals with negative scale (8 milliseconds)
[info] SubstituteUnresolvedOrdinalsSuite:
[info] - unresolved ordinal should not be unresolved (1 millisecond)
[info] - order by ordinal (8 milliseconds)
[info] - group by ordinal (6 milliseconds)
[info] StructTypeSuite:
[info] - lookup a single missing field should output existing fields (2 milliseconds)
[info] - lookup a set of missing fields should output existing fields (2 milliseconds)
[info] - lookup fieldIndex for missing field should output existing fields (1 millisecond)
[info] - SPARK-24849: toDDL - simple struct (2 milliseconds)
[info] - SPARK-24849: round trip toDDL - fromDDL (1 millisecond)
[info] - SPARK-24849: round trip fromDDL - toDDL (3 milliseconds)
[info] - SPARK-24849: toDDL must take into account case of fields. (1 millisecond)
[info] - SPARK-24849: toDDL should output field's comment (2 milliseconds)
[info] ReorderAssociativeOperatorSuite:
[info] - Reorder associative operators (44 milliseconds)
[info] - nested expression with aggregate operator (27 milliseconds)
[info] LiteralExpressionSuite:
[info] - null (436 milliseconds)
[info] - recovery maintains rate controller (2 seconds, 765 milliseconds)
[info] - application list json (1 second, 246 milliseconds)
[info] - completed app list json (41 milliseconds)
[info] - running app list json (9 milliseconds)
[info] - minDate app list json (13 milliseconds)
[info] - maxDate app list json (9 milliseconds)
[info] - maxDate2 app list json (9 milliseconds)
[info] - default (528 milliseconds)
[info] - minEndDate app list json (11 milliseconds)
[info] - maxEndDate app list json (26 milliseconds)
[info] - minEndDate and maxEndDate app list json (9 milliseconds)
[info] - minDate and maxEndDate app list json (10 milliseconds)
[info] - limit app list json (8 milliseconds)
[info] - boolean literals (93 milliseconds)
[info] - one app json (59 milliseconds)
[info] - one app multi-attempt json (9 milliseconds)
[info] - int literals (526 milliseconds)
[info] - double literals (430 milliseconds)
[info] - job list json (978 milliseconds)
[info] - string literals (86 milliseconds)
[info] - sum two literals (46 milliseconds)
[info] - binary literals (44 milliseconds)
[info] - job list from multi-attempt app json(1) (775 milliseconds)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
	at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:729)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:100)
	at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:99)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	... 2 more
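The "java.lang.Error: java.lang.InterruptedException" above is ThreadPoolExecutor.runWorker's doing: the streaming job was interrupted while awaiting a Spark job result during shutdown, and runWorker rethrows any task Throwable that is neither a RuntimeException nor an Error wrapped in a new Error. A minimal reproduction sketch (Scala can throw checked exceptions without declaring them):

    import java.util.concurrent.Executors

    object ErrorWrappingSketch {
      def main(args: Array[String]): Unit = {
        val pool = Executors.newFixedThreadPool(1)
        pool.execute(new Runnable {
          // Simulates the interrupt surfacing out of the job body
          def run(): Unit = throw new InterruptedException("simulated")
        })
        Thread.sleep(100) // worker dies with "java.lang.Error: java.lang.InterruptedException"
        pool.shutdown()
      }
    }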
[info] - decimal (1 second, 43 milliseconds)
[info] - array (249 milliseconds)
[info] - seq (81 milliseconds)
[info] - map (96 milliseconds)
[info] - job list from multi-attempt app json(2) (853 milliseconds)
[info] - one job json (8 milliseconds)
[info] - succeeded job list json (9 milliseconds)
[info] - succeeded&failed job list json (33 milliseconds)
[info] - executor list json (24 milliseconds)
[info] - stage list json (65 milliseconds)
[info] - struct (188 milliseconds)
[info] - unsupported types (map and struct) in Literal.apply (1 millisecond)
[info] - complete stage list json (13 milliseconds)
[info] - failed stage list json (10 milliseconds)
[info] - SPARK-24571: char literals (58 milliseconds)
[info] PullOutPythonUDFInJoinConditionSuite:
[info] - one stage json (55 milliseconds)
[info] - inner join condition with python udf (27 milliseconds)
[info] - unevaluable python udf and common condition (7 milliseconds)
[info] - unevaluable python udf or common condition (9 milliseconds)
[info] - pull out whole complex condition with multiple unevaluable python udf (9 milliseconds)
[info] - partial pull out complex condition with multiple unevaluable python udf (7 milliseconds)
[info] - pull out unevaluable python udf when it's mixed with evaluable one (4 milliseconds)
[info] - one stage attempt json (57 milliseconds)
[info] - throw an exception for not supported join types (11 milliseconds)
[info] - recovery with file input stream (3 seconds, 495 milliseconds)
[info] AnalysisErrorSuite:
[info] - scalar subquery with 2 columns (6 milliseconds)
[info] - scalar subquery with no column (1 millisecond)
[info] - single invalid type, single arg (11 milliseconds)
[info] - single invalid type, second arg (2 milliseconds)
[info] - multiple invalid type (2 milliseconds)
[info] - invalid window function (6 milliseconds)
[info] - distinct aggregate function in window (5 milliseconds)
[info] - distinct function (9 milliseconds)
[info] - distinct window function (6 milliseconds)
[info] - nested aggregate functions (2 milliseconds)
[info] - offset window function (2 milliseconds)
[info] - too many generators (3 milliseconds)
[info] - unresolved attributes (3 milliseconds)
[info] - unresolved attributes with a generated name (15 milliseconds)
[info] - unresolved star expansion in max (3 milliseconds)
[info] - bad casts (2 milliseconds)
[info] - sorting by unsupported column types (2 milliseconds)
[info] - sorting by attributes that are not from grouping expressions (9 milliseconds)
[info] - non-boolean filters (2 milliseconds)
[info] - non-boolean join conditions (3 milliseconds)
[info] - missing group by (1 millisecond)
[info] - ambiguous field (2 milliseconds)
[info] - ambiguous field due to case insensitivity (1 millisecond)
[info] - missing field (1 millisecond)
[info] - catch all unresolved plan (1 millisecond)
[info] - union with unequal number of columns (1 millisecond)
[info] - intersect with unequal number of columns (1 millisecond)
[info] - except with unequal number of columns (1 millisecond)
[info] - union with incompatible column types (1 millisecond)
[info] - union with an incompatible column type and compatible column types (1 millisecond)
[info] - intersect with incompatible column types (2 milliseconds)
[info] - intersect with an incompatible column type and compatible column types (2 milliseconds)
[info] - except with incompatible column types (2 milliseconds)
[info] - except with an incompatible column type and compatible column types (2 milliseconds)
[info] - SPARK-9955: correct error message for aggregate (3 milliseconds)
[info] - slide duration greater than window in time window (5 milliseconds)
[info] - start time greater than slide duration in time window (4 milliseconds)
[info] - start time equal to slide duration in time window (3 milliseconds)
[info] - SPARK-21590: absolute value of start time greater than slide duration in time window (3 milliseconds)
[info] - SPARK-21590: absolute value of start time equal to slide duration in time window (2 milliseconds)
[info] - negative window duration in time window (2 milliseconds)
[info] - zero window duration in time window (2 milliseconds)
[info] - negative slide duration in time window (4 milliseconds)
[info] - zero slide duration in time window (3 milliseconds)
[info] - generator nested in expressions (3 milliseconds)
[info] - generator appears in operator which is not Project (3 milliseconds)
[info] - an evaluated limit class must not be null (2 milliseconds)
[info]