Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content
AMPLAB_JENKINS_BUILD_PROFILE=hadoop2.6
SPARK_BRANCH=branch-2.4
PATH=/home/anaconda/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
LANG=en_US.UTF-8
SPARK_TESTING=1
JAVA_HOME=/usr/java/latest
AMPLAB_JENKINS="true"
[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on research-jenkins-worker-09 (ubuntu ubuntu-gpu research-09 ubuntu-avx2) in workspace /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6
The recommended git tool is: NONE
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
 > git fetch --tags --progress https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/branch-2.4^{commit} # timeout=10
Checking out Revision e0e1e21ee84bb3cb20eefee982da59afaa250a2c (origin/branch-2.4)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e0e1e21ee84bb3cb20eefee982da59afaa250a2c # timeout=10
Commit message: "[SPARK-34125][CORE][2.4] Make EventLoggingListener.codecMap thread-safe"
 > git rev-list --no-walk 7ae6c8d985b8f512555a9f373b7b445216006c53 # timeout=10
[spark-branch-2.4-test-sbt-hadoop-2.6] $ /bin/bash /tmp/jenkins8684197816929567029.sh
Removing R/SparkR.Rcheck/
Removing R/SparkR_2.4.8.tar.gz
Removing R/cran-check.out
Removing R/lib/
Removing R/pkg/man/
Removing R/pkg/tests/fulltests/Rplots.pdf
Removing R/target/
Removing R/unit-tests.out
Removing append/
Removing assembly/target/
Removing build/sbt-launch-0.13.17.jar
Removing build/scala-2.11.12/
Removing build/zinc-0.3.15/
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/derby.log
Removing core/dummy/
Removing core/ignored/
Removing core/metastore_db/
Removing core/target/
Removing derby.log
Removing dev/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/lint-r-report.log
Removing dev/pr-deps/
Removing dev/pycodestyle-2.4.0.py
Removing dev/sparktestsupport/__init__.pyc
Removing dev/sparktestsupport/__pycache__/
Removing dev/sparktestsupport/modules.pyc
Removing dev/sparktestsupport/shellutils.pyc
Removing dev/sparktestsupport/toposort.pyc
Removing dev/target/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/spark-warehouse/
Removing external/avro/target/
Removing external/flume-assembly/target/
Removing external/flume-sink/target/
Removing external/flume/checkpoint/
Removing external/flume/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/spark-warehouse/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10/target/
Removing external/kafka-0-8-assembly/target/
Removing external/kafka-0-8/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/checkpoint/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing launcher/target/
Removing lib/
Removing logs/
Removing metastore_db/
Removing mllib-local/target/
Removing mllib/checkpoint/
Removing mllib/spark-warehouse/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/.eggs/
Removing python/__pycache__/
Removing python/dist/
Removing python/docs/__pycache__/
Removing python/docs/_build/
Removing python/docs/epytext.pyc
Removing python/lib/pyspark.zip
Removing python/pyspark.egg-info/
Removing python/pyspark/__init__.pyc
Removing python/pyspark/__pycache__/
Removing python/pyspark/_globals.pyc
Removing python/pyspark/accumulators.pyc
Removing python/pyspark/broadcast.pyc
Removing python/pyspark/cloudpickle.pyc
Removing python/pyspark/conf.pyc
Removing python/pyspark/context.pyc
Removing python/pyspark/files.pyc
Removing python/pyspark/find_spark_home.pyc
Removing python/pyspark/heapq3.pyc
Removing python/pyspark/java_gateway.pyc
Removing python/pyspark/join.pyc
Removing python/pyspark/ml/__init__.pyc
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/base.pyc
Removing python/pyspark/ml/classification.pyc
Removing python/pyspark/ml/clustering.pyc
Removing python/pyspark/ml/common.pyc
Removing python/pyspark/ml/evaluation.pyc
Removing python/pyspark/ml/feature.pyc
Removing python/pyspark/ml/fpm.pyc
Removing python/pyspark/ml/image.pyc
Removing python/pyspark/ml/linalg/__init__.pyc
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__init__.pyc
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/param/shared.pyc
Removing python/pyspark/ml/pipeline.pyc
Removing python/pyspark/ml/recommendation.pyc
Removing python/pyspark/ml/regression.pyc
Removing python/pyspark/ml/stat.pyc
Removing python/pyspark/ml/tests.pyc
Removing python/pyspark/ml/tuning.pyc
Removing python/pyspark/ml/util.pyc
Removing python/pyspark/ml/wrapper.pyc
Removing python/pyspark/mllib/__init__.pyc
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/classification.pyc
Removing python/pyspark/mllib/clustering.pyc
Removing python/pyspark/mllib/common.pyc
Removing python/pyspark/mllib/evaluation.pyc
Removing python/pyspark/mllib/feature.pyc
Removing python/pyspark/mllib/fpm.pyc
Removing python/pyspark/mllib/linalg/__init__.pyc
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/linalg/distributed.pyc
Removing python/pyspark/mllib/random.pyc
Removing python/pyspark/mllib/recommendation.pyc
Removing python/pyspark/mllib/regression.pyc
Removing python/pyspark/mllib/stat/KernelDensity.pyc
Removing python/pyspark/mllib/stat/__init__.pyc
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/stat/_statistics.pyc
Removing python/pyspark/mllib/stat/distribution.pyc
Removing python/pyspark/mllib/stat/test.pyc
Removing python/pyspark/mllib/tests.pyc
Removing python/pyspark/mllib/tree.pyc
Removing python/pyspark/mllib/util.pyc
Removing python/pyspark/profiler.pyc
Removing python/pyspark/python/
Removing python/pyspark/rdd.pyc
Removing python/pyspark/rddsampler.pyc
Removing python/pyspark/resultiterable.pyc
Removing python/pyspark/serializers.pyc
Removing python/pyspark/shuffle.pyc
Removing python/pyspark/sql/__init__.pyc
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/catalog.pyc
Removing python/pyspark/sql/column.pyc
Removing python/pyspark/sql/conf.pyc
Removing python/pyspark/sql/context.pyc
Removing python/pyspark/sql/dataframe.pyc
Removing python/pyspark/sql/functions.pyc
Removing python/pyspark/sql/group.pyc
Removing python/pyspark/sql/readwriter.pyc
Removing python/pyspark/sql/session.pyc
Removing python/pyspark/sql/streaming.pyc
Removing python/pyspark/sql/tests.pyc
Removing python/pyspark/sql/types.pyc
Removing python/pyspark/sql/udf.pyc
Removing python/pyspark/sql/utils.pyc
Removing python/pyspark/sql/window.pyc
Removing python/pyspark/statcounter.pyc
Removing python/pyspark/status.pyc
Removing python/pyspark/storagelevel.pyc
Removing python/pyspark/streaming/__init__.pyc
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/context.pyc
Removing python/pyspark/streaming/dstream.pyc
Removing python/pyspark/streaming/flume.pyc
Removing python/pyspark/streaming/kafka.pyc
Removing python/pyspark/streaming/kinesis.pyc
Removing python/pyspark/streaming/listener.pyc
Removing python/pyspark/streaming/tests.pyc
Removing python/pyspark/streaming/util.pyc
Removing python/pyspark/taskcontext.pyc
Removing python/pyspark/test_broadcast.pyc
Removing python/pyspark/test_serializers.pyc
Removing python/pyspark/tests.pyc
Removing python/pyspark/traceback_utils.pyc
Removing python/pyspark/util.pyc
Removing python/pyspark/version.pyc
Removing python/pyspark/worker.pyc
Removing python/target/
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/spark-warehouse/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing spark-warehouse/
Removing sql/__pycache__/
Removing sql/catalyst/loc/
Removing sql/catalyst/target/
Removing sql/core/loc/
Removing sql/core/paris/
Removing sql/core/spark-warehouse/
Removing sql/core/target/
Removing sql/hive-thriftserver/derby.log
Removing sql/hive-thriftserver/metastore_db/
Removing sql/hive-thriftserver/spark-warehouse/
Removing sql/hive-thriftserver/target/
Removing sql/hive/derby.log
Removing sql/hive/loc/
Removing sql/hive/metastore_db/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/checkpoint/
Removing streaming/target/
Removing target/
Removing tools/target/
Removing work/
+++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ LIB_DIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ mkdir -p /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ pushd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R
++ . /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
Loading required package: usethis
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Updating roxygen version in /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/DESCRIPTION
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:592] @name May only use one @name per block
Warning: [/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/R/SQLContext.R:733] @name May only use one @name per block
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_string_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_misc_functions.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing FPGrowthModel-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib
+ jar cfM /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/R/lib/sparkr.zip SparkR
+ popd
[info] Using build tool sbt with Hadoop profile hadoop2.6 under environment amplab_jenkins
[info] Found the following changed modules: root
[info] Setup the following environment variables for tests:
========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.
========================================================================
Running Scala style checks
========================================================================
Scalastyle checks passed.
========================================================================
Running Python style checks
========================================================================
pycodestyle checks passed.
rm -rf _build/*
pydoc checks passed.
========================================================================
Running R style checks
========================================================================

Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union


Attaching package: ‘testthat’

The following objects are masked from ‘package:SparkR’:

    describe, not

lintr checks passed.
========================================================================
Running build tests
========================================================================
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-2.6
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-2.7
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven install for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Performing Maven validate for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Generating dependency manifest for hadoop-3.1
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
Using `mvn` from path: /home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/mvn
========================================================================
Building Spark
========================================================================
[info] Building Spark (w/Hive 1.2.1) using SBT with these arguments: -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos test:package streaming-kafka-0-8-assembly/assembly streaming-flume-assembly/assembly streaming-kinesis-asl-assembly/assembly
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] Avro compiler using stringType=CharSequence
[info] Compiling Avro IDL /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/src/main/avro/sparkflume.avdl
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tags...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}spark...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.8-SNAPSHOT.jar ...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/target/scala-2.11/spark-parent_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[info] Compiling 2 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib-local...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-sink...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}unsafe...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sketch...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-common...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kvstore...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}launcher...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target
[info] Done updating.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target
[info] Done updating.
[info] Done updating.
[info] Done updating.
[info] Compiling 78 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/classes...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-shuffle...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] Done updating.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/classes...
[info] Compiling 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/test-classes...
[info] Compiling 16 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/classes...
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/classes...
[info] Done updating.
[info] Compiling 12 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/classes...
[info] Compiling 20 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/classes...
[info] Compiling 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/classes...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target
[info] Done updating.
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}core...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}network-yarn...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/tags/target/scala-2.11/spark-tags_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:22: Unsafe is internal proprietary API and may be removed in a future release
[warn] import sun.misc.Unsafe;
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:28: Unsafe is internal proprietary API and may be removed in a future release
[warn] private static final Unsafe _UNSAFE;
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:150: Unsafe is internal proprietary API and may be removed in a future release
[warn] sun.misc.Unsafe unsafe;
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:152: Unsafe is internal proprietary API and may be removed in a future release
[warn] Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:154: Unsafe is internal proprietary API and may be removed in a future release
[warn] unsafe = (sun.misc.Unsafe) unsafeField.get(null);
[warn] ^
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.8-SNAPSHOT.jar ...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Done packaging.
[info] Compiling 1 Scala source and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/test-classes...
[info] Compiling 10 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 7 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/tools/target/scala-2.11/spark-tools_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 24 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/classes...
[info] Compiling 21 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/launcher/target/scala-2.11/spark-launcher_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/kvstore/target/scala-2.11/spark-kvstore_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:61: [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn] Mockito.when(buffers.next()).thenThrow(RuntimeException.class);
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:68: [unchecked] unchecked generic array creation for varargs parameter of type Class<? extends Throwable>[]
[warn] Mockito.when(buffers2.next()).thenReturn(mockManagedBuffer).thenThrow(RuntimeException.class);
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/src_managed/main/compiled_avro/org/apache/spark/streaming/flume/sink/EventBatch.java:243: [unchecked] unchecked cast
[warn] record.events = fieldSetFlags()[2] ? this.events : (java.util.List<org.apache.spark.streaming.flume.sink.SparkSinkEvent>) defaultValue(fields()[2]);
[warn] ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-common/target/scala-2.11/spark-network-common_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Compiling 13 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/test-classes...
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/sketch/target/scala-2.11/spark-sketch_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}catalyst...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mesos...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}ganglia-lgpl...
[info] Compiling 495 Scala sources and 81 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-sink/target/scala-2.11/spark-streaming-flume-sink_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-shuffle/target/scala-2.11/spark-network-shuffle_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done updating.
[info] Compiling 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/unsafe/target/scala-2.11/spark-unsafe_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/common/network-yarn/target/scala-2.11/spark-network-yarn_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/test-classes...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib-local/target/scala-2.11/spark-mllib-local_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}kubernetes...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}graphx...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10...
[success] created output: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn] val sslContextFactory = new SslContextFactory()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn] if (bootstrap != null && bootstrap.childGroup() != null) {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] def attemptNumber(): Int = attemptId
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn] ^
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2)
[warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2)
[warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9)
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kinesis-asl-assembly...
[info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-8-assembly...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.flume:flume-ng-core:1.6.0 (depends on 3.5.12.Final)
[warn] +- org.apache.flume:flume-ng-sdk:1.6.0 (depends on 3.5.12.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.5.12.Final)
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.5.12.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.5.12.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.7.0.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.7.0.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}yarn... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}sql-kafka-0-10... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}avro... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-kafka-0-10-assembly... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}streaming-flume-assembly... [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}mllib... [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.7.0.Final, 3.6.2.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.7.0.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [warn] five warnings found [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.5.12.Final, 3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.flume:flume-ng-core:1.6.0 (depends on 3.5.12.Final) [warn] +- org.apache.flume:flume-ng-sdk:1.6.0 (depends on 3.5.12.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn] val sslContextFactory = new SslContextFactory() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2 [warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false [warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead [warn] def attemptNumber(): Int = attemptId [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information. [warn] if (bootstrap != null && bootstrap.childGroup() != null) { [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Compiling 37 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/classes... [info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/classes... [info] Compiling 38 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/classes... [info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/classes... [info] Compiling 26 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/classes... [info] Compiling 103 Scala sources and 6 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/classes... [info] Compiling 240 Scala sources and 31 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/classes... [info] Compiling 240 Scala sources and 26 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/test-classes... [info] Done updating.
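The SSLOptions.scala warning reflects Jetty 9.4 deprecating the no-arg SslContextFactory constructor in favour of its Server and Client subclasses. A hedged sketch of the Jetty-recommended form (not necessarily the fix later applied in Spark):

    import org.eclipse.jetty.util.ssl.SslContextFactory

    // Server-side TLS contexts use the Server subclass; the bare
    // new SslContextFactory() is the deprecated form flagged above.
    val sslContextFactory = new SslContextFactory.Server()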
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/spark-ganglia-lgpl/target/scala-2.11/spark-ganglia-lgpl_2.11-2.4.8-SNAPSHOT-tests.jar ... [info] Done packaging. [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] fwInfoBuilder.setRole(role) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] (RoleResourceInfo(resource.getRole, reservation), [warn] ^ [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.8-SNAPSHOT.jar ... 
[info] Done packaging. [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.8-SNAPSHOT.jar ... [warn] six warnings found [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] fwInfoBuilder.setRole(role) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] (RoleResourceInfo(resource.getRole, reservation), [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Done updating.
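The setRole/getRole deprecations in MesosSchedulerUtils stem from Mesos moving single-role frameworks to the multi-role model. A sketch of the replacement protobuf API, assuming the org.apache.mesos.Protos generated classes and a placeholder role name (illustrative only, not Spark's actual change):

    import org.apache.mesos.Protos.FrameworkInfo

    val role = "spark" // placeholder role name
    val fwInfoBuilder = FrameworkInfo.newBuilder()
      .addRoles(role) // replaces the deprecated setRole(role)
      .addCapabilities(FrameworkInfo.Capability.newBuilder()
        .setType(FrameworkInfo.Capability.Type.MULTI_ROLE))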
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-hive_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Compiling 11 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/classes... [info] Compiling 10 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/classes... [info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/classes... [info] Compiling 11 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/classes... 
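The 'Multiple main classes detected' notices repeated above are expected for modules that ship several runnable entry points; running 'show discoveredMainClasses' lists them, and a build that wanted a single default could pin one with an sbt 0.13-style setting like this (the class name is a placeholder):

    // Silences the warning by choosing the jar's Main-Class explicitly.
    mainClass in (Compile, packageBin) := Some("org.example.Main")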
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createPollingStream( [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(credentials) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpointUrl) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain()) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpoint) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] private val client = new AmazonKinesisClient(credentials) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. 
[warn] client.setEndpoint(endpointUrl) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getRecordsRequest.setRequestCredentials(credentials) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getShardIteratorRequest.setRequestCredentials(credentials) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information. [warn] worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials()) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information. [warn] val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain()) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. 
[warn] dynamoDBClient.setRegion(RegionUtils.getRegion(regionName)) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information. [warn] .withLongLivedCredentialsProvider(longLivedCreds.provider) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] val msgs = c.poll(0) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. 
[warn] val p = consumer.poll(timeout) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] protected val kc = new KafkaCluster(kafkaParams) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics)) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges { [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(kafkaParams) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder]( [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = { [warn] ^ [warn] 
/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = { [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker], [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())), [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*)) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.getFromOffsets( [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] ^ [warn] 
/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = { [warn] ^ [warn] four warnings found [warn] 17 warnings found [warn] four warnings found [warn] 25 warnings found [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createPollingStream( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly_2.11-2.4.8-SNAPSHOT-tests.jar ... [info] Done packaging. [info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}hive-thriftserver... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}examples... [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}repl... [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] val msgs = c.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] val p = consumer.poll(timeout) [warn] [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.8-SNAPSHOT.jar ... [info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-assembly/target/scala-2.11/spark-streaming-kafka-0-10-assembly_2.11-2.4.8-SNAPSHOT-tests.jar ... [info] Done packaging. 
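The poll(0) deprecations in the kafka-0-10 module come from the Kafka 2.x client, which replaced the long-millis overload with poll(java.time.Duration). A sketch of the newer call, assuming a consumer value of type KafkaConsumer[String, String]:

    import java.time.Duration

    // Unlike the deprecated poll(0) idiom, poll(Duration) also bounds the
    // time spent fetching metadata instead of potentially blocking on it.
    val records = consumer.poll(Duration.ZERO)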
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] private val client = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getRecordsRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getShardIteratorRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. 
[warn] val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpoint) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information. [warn] .withLongLivedCredentialsProvider(longLivedCreds.provider) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information. [warn] val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] dynamoDBClient.setRegion(RegionUtils.getRegion(regionName)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information. [warn] worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.8-SNAPSHOT.jar ... 
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())), [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.getFromOffsets( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: 
object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] protected val kc = new KafkaCluster(kafkaParams) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges { [warn] [warn] 
/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(kafkaParams) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly_2.11-2.4.8-SNAPSHOT-tests.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly_2.11-2.4.8-SNAPSHOT-tests.jar ... [info] Done packaging. [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-hive_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Done updating.
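Every 'Update to Kafka 0.10 integration' deprecation in the kafka-0-8 module points at the spark-streaming-kafka-0-10 API. A minimal sketch of the direct-stream equivalent, assuming a StreamingContext ssc, a topic collection topics, and a kafkaParams map that names key/value deserializers:

    import org.apache.spark.streaming.kafka010._

    // Replaces the deprecated kafka-0-8 KafkaUtils.createDirectStream.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams))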
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-hive_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator seconds should be enabled [warn] by making the implicit value scala.language.postfixOps visible. [warn] This can be achieved by adding the import clause 'import scala.language.postfixOps' [warn] or by setting the compiler option -language:postfixOps. [warn] See the Scaladoc for value scala.language.postfixOps for a discussion [warn] why the feature should be explicitly enabled. [warn] eventually(timeout(5 seconds), interval(200 milliseconds)) { [warn] ^ [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala:461: postfix operator milliseconds should be enabled [warn] by making the implicit value scala.language.postfixOps visible. [warn] eventually(timeout(5 seconds), interval(200 milliseconds)) { [warn] ^ [info] Done updating. [warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible: [warn] [warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9 [warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2) [warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2) [warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9) [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9) [warn] [warn] * org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4 [warn] +- org.apache.spark:spark-mllib_2.11:2.4.8-SNAPSHOT (depends on 1.1.0) [warn] +- org.apache.spark:spark-catalyst_2.11:2.4.8-SNAPSHOT (depends on 1.1.0) [warn] +- org.scala-lang:scala-compiler:2.11.12 (depends on 1.0.4) [warn] [warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final} [warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final) [warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final) [warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final) [warn] [warn] Run 'evicted' to see detailed eviction warnings [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}assembly... [info] Done updating. 
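The postfixOps warnings from ExternalAppendOnlyMapSuite spell out their own remedy; with ScalaTest's span sugar in scope, either form below compiles without the feature warning (a sketch, not the patch that was eventually applied):

    import scala.language.postfixOps // enables the `5 seconds` syntax as written
    eventually(timeout(5 seconds), interval(200 milliseconds)) { /* ... */ }

    // or avoid postfix syntax entirely:
    eventually(timeout(5.seconds), interval(200.milliseconds)) { /* ... */ }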
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * com.google.code.findbugs:jsr305:3.0.2 is selected over 1.3.9
[warn] +- org.apache.arrow:arrow-vector:0.10.0 (depends on 3.0.2)
[warn] +- org.apache.arrow:arrow-memory:0.10.0 (depends on 3.0.2)
[warn] +- org.apache.spark:spark-unsafe_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn] +- org.apache.spark:spark-network-common_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn] +- org.apache.spark:spark-hive_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn] +- org.apache.hadoop:hadoop-common:2.6.5 (depends on 1.3.9)
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 1.3.9)
[warn]
[warn] * org.scala-lang.modules:scala-parser-combinators_2.11:1.1.0 is selected over 1.0.4
[warn] +- org.scala-lang:scala-compiler:2.11.12 (depends on 1.0.4)
[warn] +- org.apache.spark:spark-mllib_2.11:2.4.8-SNAPSHOT (depends on 1.0.4)
[warn] +- org.apache.spark:spark-catalyst_2.11:2.4.8-SNAPSHOT (depends on 1.0.4)
[warn]
[warn] * io.netty:netty:3.9.9.Final is selected over {3.6.2.Final, 3.7.0.Final}
[warn] +- org.apache.spark:spark-core_2.11:2.4.8-SNAPSHOT (depends on 3.9.9.Final)
[warn] +- org.apache.zookeeper:zookeeper:3.4.6 (depends on 3.6.2.Final)
[warn] +- org.apache.hadoop:hadoop-hdfs:2.6.5 (depends on 3.6.2.Final)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:48: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] implicit def setAccum[A]: AccumulableParam[mutable.Set[A], A] =
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:49: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] new AccumulableParam[mutable.Set[A], A] {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:86: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] val longAcc = sc.accumulator(0L)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:92: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] val longAcc = sc.accumulator(0L)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:100: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] val acc: Accumulator[Int] = sc.accumulator(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:112: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:129: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:145: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn] val setAcc = sc.accumulableCollection(mutable.HashSet[Int]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:146: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn] val bufferAcc = sc.accumulableCollection(mutable.ArrayBuffer[Int]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:147: method accumulableCollection in class SparkContext is deprecated: use AccumulatorV2
[warn] val mapAcc = sc.accumulableCollection(mutable.HashMap[Int, String]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:170: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn] val acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: class Accumulable in package spark is deprecated: use AccumulatorV2
[warn] var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:184: method accumulable in class SparkContext is deprecated: use AccumulatorV2
[warn] var acc: Accumulable[mutable.Set[Any], Any] = sc.accumulable(new mutable.HashSet[Any]())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: class Accumulator in package spark is deprecated: use AccumulatorV2
[warn] val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/AccumulatorSuite.scala:225: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] val acc = new Accumulator("", StringAccumulatorParam, Some("darkness"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1194: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(1), null))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1198: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1264: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType", Success, tasks(0), null))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/status/AppStatusListenerSuite.scala:1278: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] SparkListenerTaskEnd(stage1.stageId, stage1.attemptId, "taskType",
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object StringAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:132: object AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn] val acc = new LegacyAccumulatorWrapper("default", AccumulatorParam.StringAccumulatorParam)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/spark/util/AccumulatorV2Suite.scala:168: trait AccumulatorParam in package spark is deprecated: use AccumulatorV2
[warn] val param = new AccumulatorParam[MyData] {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] sc.accumulator(123.4)
[warn] ^
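The StageInfo.attemptId deprecations carry their own fix, "Use attemptNumber instead"; the migrated call from the suite is a one-word change:

    // before: stage1.attemptId (deprecated)
    SparkListenerTaskEnd(stage1.stageId, stage1.attemptNumber, "taskType", Success, tasks(1), null)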
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:79: object DoubleAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] sc.accumulator(123.4)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] sc.accumulator(123)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:84: object IntAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] sc.accumulator(123)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] sc.accumulator(123L)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:89: object LongAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] sc.accumulator(123L)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: method accumulator in class SparkContext is deprecated: use AccumulatorV2
[warn] sc.accumulator(123F)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/scala/org/apache/sparktest/ImplicitSuite.scala:94: object FloatAccumulatorParam in object AccumulatorParam is deprecated: use AccumulatorV2
[warn] sc.accumulator(123F)
[warn] ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 347 Scala sources and 93 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/classes...
[warn] 40 warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn] && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn] ^
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/test/java/test/org/apache/spark/JavaAPISuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Compiling 10 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/test-classes...
[info] Compiling 5 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/test-classes...
[info] Compiling 23 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/test-classes...
[info] Compiling 28 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/test-classes...
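All of the Accumulator, Accumulable, and AccumulatorParam warnings above name the same replacement, AccumulatorV2. A minimal sketch of the modern equivalents of the flagged patterns, assuming sc is a live SparkContext; note that collectionAccumulator is list-backed, so it keeps duplicates where the old Accumulable over a HashSet did not:

    import org.apache.spark.util.{CollectionAccumulator, LongAccumulator}
    val counter: LongAccumulator = sc.longAccumulator("counter")   // replaces sc.accumulator(0) / sc.accumulator(0L)
    val doubles = sc.doubleAccumulator("sum")                      // replaces sc.accumulator(123.4)
    val seen: CollectionAccumulator[Int] = sc.collectionAccumulator[Int]("seen") // replaces sc.accumulable(...)
    sc.parallelize(1 to 10).foreach { i => counter.add(1); seen.add(i) }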
[info] Compiling 40 Scala sources and 9 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/test-classes...
[info] Compiling 3 Scala sources and 3 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/test-classes...
[info] Compiling 6 Scala sources and 4 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/test-classes...
[info] Compiling 201 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/test-classes...
[info] Compiling 20 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumePollingStreamSuite.scala:117: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn] FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK,
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/test/scala/org/apache/spark/streaming/flume/FlumeStreamSuite.scala:83: object FlumeUtils in package flume is deprecated: Deprecated without replacement
[warn] val flumeStream = FlumeUtils.createStream(
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:103: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] var offsetRanges = Array[OffsetRange]()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:107: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:163: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:194: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn] ^
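Every kafka-0-8 deprecation above gives the same direction, "Update to Kafka 0.10 integration". A minimal sketch of the 0.10-style direct stream that replaces KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder]; ssc is an assumed StreamingContext and the broker address, group id, and topic are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",            // placeholder
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Set("topic"), kafkaParams))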
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:209: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, String](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:251: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:340: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:414: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:494: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/DirectKafkaStreamSuite.scala:565: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] kafkaStream: DStream[(K, V)]): Seq[(Time, Array[OffsetRange])] = {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:30: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] private var kc: KafkaCluster = null
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaClusterSuite.scala:40: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] kc = new KafkaCluster(Map("metadata.broker.list" -> kafkaTestUtils.brokerAddress))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:64: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val offsetRanges = Array(OffsetRange(topic, 0, 0, messages.size))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:80: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val emptyRdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:81: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] sc, kafkaParams, Array(OffsetRange(topic, 0, 0, 0)))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:86: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val badRanges = Array(OffsetRange(topic, 0, 0, messages.size + 1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:88: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:102: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:113: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val ranges = rdd.get.asInstanceOf[HasOffsetRanges].offsetRanges
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:148: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] private def getRdd(kc: KafkaCluster, topics: Set[String]) = {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:161: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] OffsetRange(tp.topic, tp.partition, fromOffset, until(tp).offset)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:165: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] tp -> Broker(lo.host, lo.port)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaRDDSuite.scala:168: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder, String](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala:66: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:96: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala:130: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val stream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocatorSuite.scala:168: non-variable type argument org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf in type org.apache.spark.deploy.k8s.KubernetesConf[org.apache.spark.deploy.k8s.KubernetesExecutorSpecificConf] is unchecked since it is eliminated by erasure
[warn] if (!argument.isInstanceOf[KubernetesConf[KubernetesExecutorSpecificConf]]) {
[warn] ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:253: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] s.consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:309: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] s.consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala:473: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:60: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] private var zkUtils: ZkUtils = _
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:88: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] def zookeeperClient: ZkUtils = {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:100: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/KafkaTestUtils.scala:178: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn] AdminUtils.createTopic(zkUtils, topic, partitions, 1, config)
[warn] ^
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/target/scala-2.11/spark-streaming-flume_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:113: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("*")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:116: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("*")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:121: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("role2")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:124: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("role2")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:138: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] ).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:151: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] assert(cpus.exists(_.getRole() == "role2"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:152: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] assert(cpus.exists(_.getRole() == "*"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:155: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] assert(mem.exists(_.getRole() == "role2"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:156: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] assert(mem.exists(_.getRole() == "*"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:417: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("*")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterSchedulerSuite.scala:420: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] Resource.newBuilder().setRole("*")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:271: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] ).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:272: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] when(driver.declineOffer(mesosOffers.get(1).getId)).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:273: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] when(driver.declineOffer(mesosOffers.get(2).getId)).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:299: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] when(driver.declineOffer(mesosOffers2.get(0).getId)).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:325: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] .setRole("prod")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:329: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] .setRole("prod")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:334: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] .setRole("dev")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:339: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] .setRole("dev")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:380: method valueOf in object Status is deprecated: see corresponding Javadoc for more information.
[warn] ).thenReturn(Status.valueOf(1))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:397: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] assert(cpusDev.getRole.equals("dev"))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:400: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] r.getName.equals("mem") && r.getScalar.getValue.equals(484.0) && r.getRole.equals("prod")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosFineGrainedSchedulerBackendSuite.scala:403: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] r.getName.equals("cpus") && r.getScalar.getValue.equals(1.0) && r.getRole.equals("prod")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtilsSuite.scala:54: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] role.foreach { r => builder.setRole(r) }
[warn] ^
[warn] 29 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] 7 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/target/scala-2.11/spark-streaming-kafka-0-8_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/target/scala-2.11/spark-streaming-kafka-0-10_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 24 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/target/scala-2.11/spark-mesos_2.11-2.4.8-SNAPSHOT-tests.jar ...
[warn] one warning found
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/kubernetes/core/target/scala-2.11/spark-kubernetes_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/yarn/target/scala-2.11/spark-yarn_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/graphx/target/scala-2.11/spark-graphx_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn] new org.apache.parquet.hadoop.ParquetInputSplit(
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn] ^
[info] Compiling 8 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/streaming/target/scala-2.11/spark-streaming_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
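The ProcessingTime deprecations repeat their replacement verbatim: Trigger.ProcessingTime(intervalMs). A sketch of setting that trigger on a streaming write, where df is an assumed streaming DataFrame:

    import org.apache.spark.sql.streaming.Trigger
    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))  // or Trigger.ProcessingTime(10000L)
      .start()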
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisInputDStreamBuilderSuite.scala:163: method initialPositionInStream in class Builder is deprecated: use initialPosition(initialPosition: KinesisInitialPosition)
[warn] .initialPositionInStream(InitialPositionInStream.AT_TIMESTAMP)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:103: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] val kinesisStream1 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:106: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] val kinesisStream2 = KinesisUtils.createStream(ssc, "myAppName", "mySparkStream",
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala:113: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] val inputStream = KinesisUtils.createStream(ssc, appName, "dummyStream",
[warn] ^
[warn] four warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisInputDStreamBuilderSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/target/scala-2.11/spark-streaming-kinesis-asl_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] 6 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn] && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn] new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 29 Scala sources and 2 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/classes...
[info] Compiling 20 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/classes...
[info] Compiling 304 Scala sources and 5 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/classes...
[info] Compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/classes...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn] val p = consumer.poll(pollTimeoutMs)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55: [unchecked] unchecked call to SparkAvroKeyRecordWriter(Schema,GenericData,CodecFactory,OutputStream,int,Map<String,String>) as a member of the raw type SparkAvroKeyRecordWriter
[warn] return new SparkAvroKeyRecordWriter(
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55: [unchecked] unchecked conversion
[warn] return new SparkAvroKeyRecordWriter(
[warn] ^
[warn] 6 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn] consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn] val p = consumer.poll(pollTimeoutMs)
[warn]
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] there were 16 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 12 Scala sources and 171 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/classes...
[info] Compiling 292 Scala sources and 33 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/test-classes...
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/catalyst/target/scala-2.11/spark-catalyst_2.11-2.4.8-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:266: [unchecked] unchecked call to read(TProtocol,T) as a member of the raw type IScheme
[warn] schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
[warn] ^
[warn] where T is a type-variable:
[warn] T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:270: [unchecked] unchecked call to write(TProtocol,T) as a member of the raw type IScheme
[warn] schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
[warn] ^
[warn] where T is a type-variable:
[warn] T extends TBase declared in interface IScheme
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:313: [unchecked] getScheme() in TArrayTypeEntryStandardSchemeFactory implements <S>getScheme() in SchemeFactory
[warn] public TArrayTypeEntryStandardScheme getScheme() {
[warn] ^
[warn] return type requires unchecked conversion from TArrayTypeEntryStandardScheme to S
[warn] where S is a type-variable:
[warn] S extends IScheme declared in method <S>getScheme()
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TArrayTypeEntry.java:361: [unchecked] getScheme() in TArrayTypeEntryTupleSchemeFactory implements <S>getScheme() in SchemeFactory
[warn] public TArrayTypeEntryTupleScheme getScheme() {
[warn] ^
[warn] return type requires unchecked conversion from TArrayTypeEntryTupleScheme to S
[warn] where S is a type-variable:
[warn] S extends IScheme declared in method <S>getScheme()
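The consumer.poll(0) warnings flag the old poll(long) overload, whose timeout does not bound metadata fetches; the replacement introduced in Kafka client 2.0 (KIP-266) takes a java.time.Duration that bounds the whole call. A sketch, with consumer and pollTimeoutMs assumed from the surrounding code:

    import java.time.Duration
    val records = consumer.poll(Duration.ofMillis(pollTimeoutMs))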
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/gen/java/org/apache/hive/service/cli/thrift/TBinaryColumn.java:240: [unchecked] unchecked cast
[warn] setValues((List<ByteBuffer>)value);
[warn] ^
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] override def load(path: String): OneHotEncoder = super.load(path)
[warn] ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] override def load(path: String): OneHotEncoder = super.load(path)
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 6 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/classes...
[info] Compiling 191 Scala sources and 128 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes...
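The OneHotEncoder deprecation says the estimator variant is the one that becomes OneHotEncoder in 3.0.0. A minimal sketch of the 2.4-era replacement; df and the column names are placeholders:

    import org.apache.spark.ml.feature.OneHotEncoderEstimator
    val encoder = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
    val encoded = encoder.fit(df).transform(df)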
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] if (addedClasspath != "") {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] settings.classpath append addedClasspath
[warn] ^
[warn] two warnings found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] if (addedClasspath != "") {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] settings.classpath append addedClasspath
[warn]
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Compiling 3 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/target/scala-2.11/spark-repl_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:393: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn] .isInstanceOf[(String, Timestamp)])
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/sources/TextSocketStreamSuite.scala:392: non-variable type argument String in type (String, java.sql.Timestamp) is unchecked since it is eliminated by erasure
[warn] assert(r.get().get(0, TextSocketReader.SCHEMA_TIMESTAMP)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala:204: postfix operator minute should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn] eventually(timeout(1 minute)) {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:693: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn] q1
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:230: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn] df.explode("words", "word") { word: String => word.split(" ").toSeq }.select('word),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:238: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn] df.explode('letters) {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:288: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn] df.explode($"*") { case Row(prefix: String, csv: String) =>
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:295: method explode in class Dataset is deprecated: use flatMap() or select() with functions.explode() instead
[warn] df.explode('prefix, 'csv) { case Row(prefix: String, csv: String) =>
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:228: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn] val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(lit(0), lit(2))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn] val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:242: method currentRow in object functions is deprecated: Use Window.currentRow
[warn] val window = Window.partitionBy($"value").orderBy($"key").rangeBetween(currentRow, lit(2.5D))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method rangeBetween in class WindowSpec is deprecated: Use the version with Long parameter types
[warn] .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:259: method currentRow in object functions is deprecated: Use Window.currentRow
[warn] .rangeBetween(currentRow, lit(CalendarInterval.fromString("interval 23 days 4 hours")))
[warn] ^
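[editor's note] Several of the warnings above spell out their own fixes. A sketch of the non-deprecated forms, assuming import spark.implicits._ is in scope and df is a hypothetical DataFrame:

    import scala.language.postfixOps                    // enables `1 minute`; writing `1.minute` avoids postfix syntax entirely
    import org.apache.spark.sql.functions.{explode, split}
    import org.apache.spark.sql.expressions.Window

    // was: df.explode("words", "word") { ... } -- use functions.explode() in a select instead
    val words = df.select(explode(split($"sentence", " ")).as("word"))

    // was: rangeBetween(lit(0), lit(2)) with functions.currentRow -- use the Long overload and Window.currentRow
    val w = Window.partitionBy($"value").orderBy($"key").rangeBetween(Window.currentRow, 2L)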
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/ProcessingTimeSuite.scala:30: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] def getIntervalMs(trigger: Trigger): Long = trigger.asInstanceOf[ProcessingTime].intervalMs
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetCompatibilityTest.scala:49: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readAllFootersInParallel(hadoopConf, parquetFiles, true)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetInteroperabilitySuite.scala:178: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(hadoopConf, part.getPath, NO_FILTER)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:133: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:148: method writeMetadataFile in object ParquetFileWriter is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileWriter.writeMetadataFile(configuration, path, Seq(footer).asJava)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:154: method readAllFootersInParallel in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readAllFootersInParallel(configuration, fs.getFileStatus(path)).asScala.toSeq
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala:158: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:51: class ConcurrentHashSet in package util is deprecated: see corresponding Javadoc for more information.
[warn] val triggerTimes = new ConcurrentHashSet[Int]
[warn] ^
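[editor's note] The ParquetFileReader deprecations above only say "see corresponding Javadoc". A sketch of the usual substitute for readFooter, assuming parquet-mr 1.10+ and the path/hadoopConf values from the tests above; this is not a replacement the log itself names:

    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // was: ParquetFileReader.readFooter(hadoopConf, path, NO_FILTER)
    val reader = ParquetFileReader.open(HadoopInputFile.fromPath(path, hadoopConf))
    try {
      val footer = reader.getFooter   // the same ParquetMetadata the deprecated call returned
    } finally {
      reader.close()
    }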
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala:55: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn] val executor = ProcessingTimeExecutor(ProcessingTime("1000 milliseconds"), clock)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:316: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn] StartStream(ProcessingTime("10 seconds"), new StreamManualClock),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala:357: method apply in object ProcessingTime is deprecated: use Trigger.ProcessingTime(interval)
[warn] StartStream(ProcessingTime("10 seconds"), new StreamManualClock(60 * 1000)),
[warn] ^
[warn] 24 warnings found
[info] Note: /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java uses or overrides a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Compiling 5 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/test-classes...
[info] Compiling 9 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/test-classes...
[info] Compiling 14 Scala sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/test-classes...
[info] Compiling 193 Scala sources and 66 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/test-classes...
[info] Compiling 88 Scala sources and 17 Java sources to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/test-classes...
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/target/scala-2.11/spark-sql_2.11-2.4.8-SNAPSHOT-tests.jar ...
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveCliSessionStateSuite.scala:31: a pure expression does nothing in statement position; you may be omitting necessary parentheses
[warn] try f finally SessionState.detachSession()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaContinuousTest.scala:76: reflective access of structural type member value activeTaskIdCount should be enabled
[warn] by making the implicit value scala.language.reflectiveCalls visible.
[warn] This can be achieved by adding the import clause 'import scala.language.reflectiveCalls'
[warn] or by setting the compiler option -language:reflectiveCalls.
[warn] See the Scaladoc for value scala.language.reflectiveCalls for a discussion
[warn] why the feature should be explicitly enabled.
[warn] assert(tasksEndedListener.activeTaskIdCount.get() == 0)
[warn] ^
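[editor's note] The ProcessingTime deprecations above point at Trigger.ProcessingTime. A minimal sketch against a hypothetical streaming DataFrame df:

    import org.apache.spark.sql.streaming.Trigger

    // was: ProcessingTime("10 seconds") / ProcessingTime(intervalMs)
    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()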
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:141: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] Seq(new Field("null", Schema.create(Type.NULL), "doc", null)).asJava
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:164: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:192: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:224: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] val fields = Seq(new Field("field1", union, "doc", null)).asJava
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:250: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] val fields = Seq(new Field("field1", UnionOfOne, "doc", null)).asJava
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:303: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] new Field("field1", complexUnionType, "doc", null),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:304: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] new Field("field2", complexUnionType, "doc", null),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:305: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] new Field("field3", complexUnionType, "doc", null),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:306: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] new Field("field4", complexUnionType, "doc", null)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/src/test/scala/org/apache/spark/sql/avro/AvroSuite.scala:970: constructor Field in class Field is deprecated: see corresponding Javadoc for more information.
[warn] val avroField = new Field(name, avroType, "", null)
[warn] ^
[warn] one warning found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive-thriftserver/target/scala-2.11/spark-hive-thriftserver_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[info] Done packaging.
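[editor's note] The deprecated Avro Schema.Field constructor above only refers to the Javadoc; one non-deprecated route is Avro's SchemaBuilder. A sketch, assuming unionSchema is the union schema built in the suite above (the record and field names are hypothetical):

    import org.apache.avro.SchemaBuilder

    // was: new Field("field1", union, "doc", null)
    val schema = SchemaBuilder.record("test").fields()
      .name("field1").doc("doc").`type`(unionSchema).withDefault(null)
      .endRecord()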
[warn] 10 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/avro/target/scala-2.11/spark-avro_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:66: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] private var zkUtils: ZkUtils = _
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:95: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] def zookeeperClient: ZkUtils = {
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:107: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] zkUtils = ZkUtils(s"$zkHost:$zkPort", zkSessionTimeout, zkConnectionTimeout, false)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:198: method createTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn] AdminUtils.createTopic(zkUtils, topic, partitions, 1)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:225: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn] AdminUtils.deleteTopic(zkUtils, topic)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:290: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn] kc.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:304: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn] kc.poll(0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:383: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] !zkUtils.pathExists(getDeleteTopicPath(topic)),
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:384: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] s"${getDeleteTopicPath(topic)} still exists")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:385: object ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] assert(!zkUtils.pathExists(getTopicPath(topic)), s"${getTopicPath(topic)} still exists")
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:409: class ZkUtils in package utils is deprecated: This is an internal class that is no longer used by Kafka and will be removed in a future release. Please use org.apache.kafka.clients.admin.AdminClient instead.
[warn] zkUtils: ZkUtils,
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala:421: method deleteTopic in object AdminUtils is deprecated: This method is deprecated and will be replaced by kafka.zk.AdminZkClient.
[warn] AdminUtils.deleteTopic(zkUtils, topic)
[warn] ^
[warn] 14 warnings found
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/target/scala-2.11/spark-sql-kafka-0-10_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
[warn] there were 25 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:464: [unchecked] unchecked cast
[warn] setLint((List<Integer>)value);
[warn] ^
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/hive/target/scala-2.11/spark-hive_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
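[editor's note] The ZkUtils/AdminUtils warnings above recommend org.apache.kafka.clients.admin.AdminClient, and KafkaConsumer.poll(0) has a Duration overload since Kafka 2.0. A sketch; the broker address and topic name are hypothetical:

    import java.time.Duration
    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    val admin = AdminClient.create(props)
    // was: AdminUtils.createTopic(zkUtils, topic, partitions, 1)
    admin.createTopics(Collections.singleton(new NewTopic("topic", 1, 1.toShort))).all().get()
    // was: AdminUtils.deleteTopic(zkUtils, topic)
    admin.deleteTopics(Collections.singleton("topic")).all().get()
    admin.close()
    // was: kc.poll(0) -- prefer kc.poll(Duration.ofMillis(0))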
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:120: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn] assert(model.computeCost(dataset) < 0.1)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:135: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn] assert(model.computeCost(dataset) == summary.trainingCost)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/clustering/KMeansSuite.scala:206: method computeCost in class KMeansModel is deprecated: This method is deprecated and will be removed in 3.0.0. Use ClusteringEvaluator instead. You can also get the cost on the training dataset in the summary.
[warn] model.computeCost(dataset)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:46: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] ParamsSuite.checkParams(new OneHotEncoder)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:51: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val encoder = new OneHotEncoder()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:74: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val encoder = new OneHotEncoder()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:96: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val encoder = new OneHotEncoder()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:110: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val encoder = new OneHotEncoder()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:121: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val t = new OneHotEncoder()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/feature/OneHotEncoderSuite.scala:156: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] val encoder = new OneHotEncoder()
[warn] ^
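[editor's note] The computeCost deprecation above names ClusteringEvaluator and the model summary. A sketch against the fitted model and dataset from the suite:

    import org.apache.spark.ml.evaluation.ClusteringEvaluator

    // was: model.computeCost(dataset)
    val silhouette = new ClusteringEvaluator().evaluate(model.transform(dataset))
    val cost = model.summary.trainingCost   // the training cost is still exposed on the summary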
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:52: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] var df = readImages(imagePath)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:55: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] df = readImages(imagePath, null, true, -1, false, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:58: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] df = readImages(imagePath, null, true, -1, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:62: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] df = readImages(imagePath, null, true, -1, true, 0.5, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:69: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, null, false, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:74: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath + "/kittens/DP153539.jpg", null, false, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:79: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath + "/multi-channel/BGRA.png", null, false, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:84: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:90: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath + "/kittens/not-image.txt", null, false, 3, false, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:96: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] readImages(imagePath, null, true, 3, true, 1.1, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:103: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] readImages(imagePath, null, true, 3, true, -0.1, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:109: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, null, true, 3, true, 0.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:114: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, sparkSession = spark, true, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:119: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, null, true, 3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:124: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, null, true, -3, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:129: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val df = readImages(imagePath, null, true, 0, true, 1.0, 0)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/ml/image/ImageSchemaSuite.scala:136: method readImages in object ImageSchema is deprecated: use `spark.read.format("image").load(path)` and this `readImages` will be removed in 3.0.0.
[warn] val images = readImages(imagePath + "/multi-channel/").collect
[warn] ^
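[editor's note] The readImages deprecations above give the replacement verbatim. A sketch, with imagePath as in the suite:

    // was: ImageSchema.readImages(imagePath, ...)
    val df = spark.read.format("image").load(imagePath)
    df.select("image.origin", "image.width", "image.height").show()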
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:227: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn] val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:303: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn] val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:338: constructor LogisticRegressionWithSGD in class LogisticRegressionWithSGD is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn] val lr = new LogisticRegressionWithSGD().setIntercept(true)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala:919: object LogisticRegressionWithSGD in package classification is deprecated: Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS
[warn] val model = LogisticRegressionWithSGD.train(points, 2)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala:369: method train in object KMeans is deprecated: Use train method without 'runs'
[warn] val model = KMeans.train(points, 2, 2, 1, initMode)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:80: value precision in class MulticlassMetrics is deprecated: Use accuracy.
[warn] assert(math.abs(metrics.accuracy - metrics.precision) < delta)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:81: value recall in class MulticlassMetrics is deprecated: Use accuracy.
[warn] assert(math.abs(metrics.accuracy - metrics.recall) < delta)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/evaluation/MulticlassMetricsSuite.scala:82: value fMeasure in class MulticlassMetrics is deprecated: Use accuracy.
[warn] assert(math.abs(metrics.accuracy - metrics.fMeasure) < delta)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:58: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn] val ls = new LassoWithSGD()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:102: constructor LassoWithSGD in class LassoWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn] val ls = new LassoWithSGD()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LassoSuite.scala:156: object LassoWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.
[warn] val model = LassoWithSGD.train(points, 2)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:49: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn] val linReg = new LinearRegressionWithSGD().setIntercept(true)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:75: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn] val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:106: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn] val linReg = new LinearRegressionWithSGD().setIntercept(false)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/LinearRegressionSuite.scala:163: object LinearRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn] val model = LinearRegressionWithSGD.train(points, 2)
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:63: constructor LinearRegressionWithSGD in class LinearRegressionWithSGD is deprecated: Use ml.regression.LinearRegression or LBFGS
[warn] val linearReg = new LinearRegressionWithSGD()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:71: constructor RidgeRegressionWithSGD in class RidgeRegressionWithSGD is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn] val ridgeReg = new RidgeRegressionWithSGD()
[warn] ^
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/test/scala/org/apache/spark/mllib/regression/RidgeRegressionSuite.scala:113: object RidgeRegressionWithSGD in package regression is deprecated: Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.
[warn] val model = RidgeRegressionWithSGD.train(points, 2)
[warn] ^
[warn] 45 warnings found
[info] Note: Some input files use or override a deprecated API.
[info] Note: Recompile with -Xlint:deprecation for details.
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/target/scala-2.11/spark-mllib_2.11-2.4.8-SNAPSHOT-tests.jar ...
[info] Done packaging.
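[editor's note] The mllib *WithSGD warnings above all point at the spark.ml equivalents. A sketch, where trainingDf is a hypothetical DataFrame of (label, features):

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.regression.LinearRegression

    // was: new LogisticRegressionWithSGD().setIntercept(true)
    val lrModel = new LogisticRegression().setFitIntercept(true).fit(trainingDf)

    // was: LassoWithSGD (elasticNetParam = 1.0) / RidgeRegressionWithSGD (elasticNetParam = 0.0);
    // per the warning, set regParam explicitly -- the SGD classes defaulted to 0.01, LinearRegression to 0.0
    val lasso = new LinearRegression().setElasticNetParam(1.0).setRegParam(0.01).fit(trainingDf)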
[success] Total time: 580 s, completed Jan 17, 2021 5:16:38 PM
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn] val sslContextFactory = new SslContextFactory()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] def attemptNumber(): Int = attemptId
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn] if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
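[editor's note] The AccumulableParam deprecation above says "use AccumulatorV2". A minimal custom accumulator sketch; the element type and accumulator name are hypothetical:

    import org.apache.spark.util.AccumulatorV2

    class SetAccumulator extends AccumulatorV2[Long, Set[Long]] {
      private var set = Set.empty[Long]
      def isZero: Boolean = set.isEmpty
      def copy(): SetAccumulator = { val a = new SetAccumulator; a.set = set; a }
      def reset(): Unit = set = Set.empty[Long]
      def add(v: Long): Unit = set += v
      def merge(other: AccumulatorV2[Long, Set[Long]]): Unit = set ++= other.value
      def value: Set[Long] = set
    }
    // sc.register(new SetAccumulator, "ids")   // hypothetical usage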
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder](
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] offsetRanges: JList[OffsetRange],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] offsetRanges: JList[OffsetRange],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] offsetRanges: JList[OffsetRange],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] leaders: JMap[TopicAndPartition, Broker],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())),
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*))
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.getFromOffsets(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V](
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] protected val kc = new KafkaCluster(kafkaParams)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics))
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val offsetRanges: Array[OffsetRange],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val offsetRanges: Array[OffsetRange],
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] val kc = new KafkaCluster(kafkaParams)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration
[warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset)
[warn]
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'filterDistinctLines' was applied to 7 files
[warn] Strategy 'first' was applied to 95 files
[info] SHA-1: 177496adeb4b784b30fce8614a984fff59e4b551
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8-assembly/target/scala-2.11/spark-streaming-kafka-0-8-assembly-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 17 s, completed Jan 17, 2021 5:16:55 PM
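[editor's note] The kafka-0-8 module warns "Update to Kafka 0.10 integration" throughout. A sketch of the 0.10 direct stream; ssc, the broker address and the topic name are hypothetical:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010._

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example")
    // was: kafka-0-8's KafkaUtils.createDirectStream[..., DefaultDecoder, DefaultDecoder, ...]
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("topic"), kafkaParams))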
[warn] if (bootstrap != null && bootstrap.childGroup() != null) { [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createPollingStream( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Checking every *.class/*.jar file's SHA-1. [warn] Strategy 'discard' was applied to a file [warn] Strategy 'filterDistinctLines' was applied to 7 files [warn] Strategy 'first' was applied to 88 files [info] SHA-1: 06f676839096880cf7d44ef6f12cafcbc4120dea [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume-assembly/target/scala-2.11/spark-streaming-flume-assembly-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [success] Total time: 16 s, completed Jan 17, 2021 5:17:11 PM [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information. 
[warn] val sslContextFactory = new SslContextFactory()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] def attemptNumber(): Int = attemptId
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn] if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn] private val client = new AmazonKinesisClient(credentials)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn] client.setEndpoint(endpointUrl)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn] getRecordsRequest.setRequestCredentials(credentials)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information.
[warn] getShardIteratorRequest.setRequestCredentials(credentials)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead
[warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn] val kinesisClient = new AmazonKinesisClient(credentials)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn] kinesisClient.setEndpoint(endpointUrl)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn] val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn] kinesisClient.setEndpoint(endpoint)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] .withLongLivedCredentialsProvider(longLivedCreds.provider)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information.
[warn] val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn] client.setEndpoint(endpointUrl)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information.
[warn] val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information.
[warn] dynamoDBClient.setRegion(RegionUtils.getRegion(regionName))
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information.
[warn] worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration)
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Checking every *.class/*.jar file's SHA-1.
[warn] Strategy 'discard' was applied to 2 files
[warn] Strategy 'filterDistinctLines' was applied to 8 files
[warn] Strategy 'first' was applied to 50 files
[info] SHA-1: f43f2d192c98ad103d1a490b1b3bc57922fc7337
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl-assembly/target/scala-2.11/spark-streaming-kinesis-asl-assembly-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
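The recurring deprecations above each name a documented replacement. For the AccumulableParam warning, the Spark 2.x replacement is a subclass of org.apache.spark.util.AccumulatorV2; a minimal sketch follows (the class and variable names are illustrative, not code from this build):

    import org.apache.spark.util.AccumulatorV2

    // Minimal AccumulatorV2 standing in for a deprecated AccumulableParam-based sum.
    class LongSumAccumulator extends AccumulatorV2[Long, Long] {
      private var sum = 0L
      override def isZero: Boolean = sum == 0L
      override def copy(): LongSumAccumulator = {
        val acc = new LongSumAccumulator
        acc.sum = this.sum
        acc
      }
      override def reset(): Unit = sum = 0L
      override def add(v: Long): Unit = sum += v
      override def merge(other: AccumulatorV2[Long, Long]): Unit = sum += other.value
      override def value: Long = sum
    }

    // Usage: register with the SparkContext, then add from tasks.
    // val acc = new LongSumAccumulator
    // sc.register(acc, "records")

Likewise, the KinesisUtils.createStream warnings point at the KinesisInputDStream builder; a sketch under the same caveat (stream name, endpoint, region, and app name are placeholders):

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    // Builder-based replacement for the deprecated KinesisUtils.createStream.
    def buildStream(ssc: StreamingContext) =
      KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("myStream")                                  // placeholder
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")  // placeholder
        .regionName("us-west-2")                                 // placeholder
        .initialPosition(new KinesisInitialPositions.Latest())
        .checkpointAppName("myKinesisApp")                       // placeholder; also names the DynamoDB lease table
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()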
[success] Total time: 17 s, completed Jan 17, 2021 5:17:28 PM
========================================================================
Detecting binary incompatibilities with MiMa
========================================================================
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Strategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.MainClassOptionParser
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
Error instrumenting class:org.apache.spark.mapred.SparkHadoopMapRedUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Hello
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoHelperChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SignalUtils.ActionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuerySpecificationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DateAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ImplicitTypeCasts
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableIdentifierContext
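This MiMa step compares the jars just packaged against the previous release's published artifacts. When a reported incompatibility is intentional, Spark suppresses it via a filter in project/MimaExcludes.scala; a hypothetical entry might look like the sketch below (the problem type is a real mima-core filter, but the target method is invented for illustration):

    import com.typesafe.tools.mima.core._

    // Hypothetical exclusion for a deliberately removed method.
    val exampleExcludes = Seq(
      ProblemFilters.exclude[DirectMissingMethodProblem](
        "org.apache.spark.SomeClass.removedMethod")
    )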
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.ZooKeeperLeaderElectionAgent.LeadershipStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnsetTablePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AggregationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.Listener
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ArrayConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestWorkerState
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.Accessor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeSorterSpillMerger.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColumnReferenceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
Error instrumenting class:org.apache.spark.input.StreamInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.CaseWhenCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.BlockFetchStarter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator16$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.$$typecreator1$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractWindowExpressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.PrefixCache
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedRelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.CubeType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGroupingAnalytics
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Error instrumenting class:org.apache.spark.deploy.SparkSubmit$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableObjectArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.Division
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.TrackerState
21/01/17 17:18:12 WARN Utils: Your hostname, research-jenkins-worker-09 resolves to a loopback address: 127.0.1.1; using 192.168.10.31 instead (on interface enp4s0f0)
21/01/17 17:18:12 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ParenthesizedExpressionContext
Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableProviderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveBroadcastHints
Error instrumenting class:org.apache.spark.sql.execution.command.DDLUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBytesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.expressions
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFunctionContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.CommitLog
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LocationSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.LBFGS.CostFun
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyKeyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.COMMITTED
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.ClientPool
Error instrumenting class:org.apache.spark.scheduler.SplitInfo$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkBuildInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTablePartitionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SmallIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseMatrixPickler
Error instrumenting class:org.apache.spark.api.python.DoubleArrayWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelReader
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesStore
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.AnalysisErrorAt
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Once
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RepairTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigest
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowConstructorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery.PartitionedRelation
Error instrumenting class:org.apache.spark.deploy.SparkHadoopUtil$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.AssociationRules.Rule
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedGroupConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummaryAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.CheckpointWriter.CheckpointWriteHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowDefContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.EndpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNewInstance
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DateConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Aggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.IteratorForPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetDatabasePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetQuantifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CtesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans.DslLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Evolving
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.ReceiverTrackerEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.IdentityProjection
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.$SortedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.OffsetSeqLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WidenSetOperationTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.DateTimeOperations
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.output
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.$ClientCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorResponse
Error instrumenting class:org.apache.spark.launcher.InProcessLauncher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Count
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.FloatConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.BinaryPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.InputFileBlockHolder.FileBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FailNativeCommandContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableSchemaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Timer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Cholesky
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis
Error instrumenting class:org.apache.spark.deploy.master.ui.MasterWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableAttemptInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.RollupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableValuedFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Min
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStoreProvider$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.EmptyDirectoryWriteTask
21/01/17 17:18:14 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
21/01/17 17:18:14 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.linalg.distributed.RowMatrix.$SVDMode$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingDeduplicationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableLongArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumericLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionValContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.TextBasedFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowDatabasesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelReader
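The two BLAS warnings above mean mllib fell back to its pure-JVM F2J implementation because no native netlib backend was found on the classpath. If native BLAS is wanted, the approach documented by netlib-java is to add its native bindings to the build; a build.sbt sketch (the version shown is an assumption, not taken from this build):

    // build.sbt fragment: pull in netlib-java's native system/reference bindings.
    libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()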
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.TransportFrameDecoder.Interceptor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.FixedLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Bucketizer.BucketizerWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.LabeledPointPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.LevelDBLogger
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerSlave
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter.ElementConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.errors.TreeNodeException
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreType
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.State
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.CTESubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TypeConstructorContext
Error instrumenting class:org.apache.spark.SSLOptions
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Append
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.ArrayDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableAliasContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ImplicitOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelReader
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator2$1
Error instrumenting class:org.apache.spark.input.WholeTextFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveMissingReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnquotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.PrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNaturalAndUsingJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
Error instrumenting class:org.apache.spark.deploy.history.HistoryServer
Error instrumenting class:org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
Error instrumenting class:org.apache.spark.api.python.TestOutputKeyConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.NNLS.Workspace
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.api.python.TestWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.RawStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SendHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
Error instrumenting class:org.apache.spark.deploy.FaultToleranceTest$delayedInit$body
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMax
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.LeftSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.Optimizer.OptimizeSubqueries
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.StatFunctions.CovarianceCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Unstable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RecoverPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.Hasher
Error instrumenting class:org.apache.spark.input.StreamBasedRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskReaper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.STATE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LocalIndexEncoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.SharedReadWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Replaced
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.StringType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterInStandby
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.DatabaseDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.ChainedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator18$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DereferenceContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.MultiLineCSVDataSource$
Error instrumenting class:org.apache.spark.deploy.security.HBaseDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntegerLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.LookupFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BooleanAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Complete
Error instrumenting class:org.apache.spark.input.StreamFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnalyzeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.TimestampConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowFunctionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DoubleAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StructContext
[WARN] Unable to detect inner functions for class:org.apache.spark.MapOutputTrackerMaster.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateDatabaseContext
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicatedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.OneWayRpcCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteDirContext
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase$NullIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SortItemContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubquery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Numeric$2$
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.ReceiverInputDStream.ReceiverRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Key
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.util.BytecodeUtils.MethodInvocationFinder
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator30$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.BooleanEquality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ByteConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.$DigestCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.IntHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.ABORTED
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ByteType.$$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitioningUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.trees.TreeNodeRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.MutableProjection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.ByteArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.SummarizerBuffer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UncacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslSymbol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Max
21/01/17 17:18:16 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsRDD [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingJoinStrategy [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ShuffleMetricsSource [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexDataTypeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.InMemoryScans [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.FixNullability [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierAlternativeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.FunctionArgumentConversion [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator10$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StructAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Aggregation [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySinkV2.AddedData [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus Error instrumenting class:org.apache.spark.ml.source.libsvm.LibSVMFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.AttributeSeq [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator2$1 Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$ [WARN] Unable to detect inner functions 
for class:org.apache.spark.sql.Encoders.$typecreator10$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.UPDATING [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowsContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDesc [WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggAliasInGroupBy [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.QuantileDiscretizer.QuantileDiscretizerWriter Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes Error instrumenting class:org.apache.spark.deploy.rest.RestSubmissionServer [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDeBase.BasePickler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.SpecialLimits [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2n [WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Case [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.PartitionOverwriteMode [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelReader [WARN] Unable to detect inner functions for 
class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowVectorAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedWindowContext Error instrumenting class:org.apache.spark.sql.execution.command.CommandUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CacheTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.CachedKafkaConsumer.CacheKey [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Update [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.SpillableArrayIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter.$KeyValueConverter [WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.StringArrays [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StorageHandlerContext Error instrumenting class:org.apache.spark.input.Configurable [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelReader [WARN] Unable to 
detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DataType.JSortedObject [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsFractional [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Expression [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$ChunkCallback [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropFunctionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FrameBoundContext [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeDatabaseContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData [WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitWriter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorNullsLast Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator8$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateViewContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveFunctions [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.SortComparator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator2$1 [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.execution.SparkStrategies.StatefulAggregationStrategy [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableIdentifierContext Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$FileTypes$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.RatingPickler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetOperationContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.MessageDecoder.1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.Data$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStore.MaintenanceTask [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeColNameContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinTypeContext Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildSide [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelWriter.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticBinaryContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableFileFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildRight [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExplainContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.FlattenStyle [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data Error instrumenting class:org.apache.spark.ui.JettyUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UseContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.$KVSorterIterator [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillableIterator [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.implicits [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOrdinalInOrderByAndGroupBy [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Binary$2$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1 Error instrumenting class:org.apache.spark.input.FixedLengthBinaryRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineWriter Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.StopAppClient [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.LongAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.QueryExecution.debug [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GroupingSetContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ShortAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator10$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryNoWithContext Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionPath$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResourceContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.$Index [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.Data$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslExpression [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrayAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Subscript [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveReferences [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyRDDWithPartitions [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDescNullsFirst [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile$ [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data Error instrumenting class:org.apache.spark.sql.catalyst.parser.ParserUtils$EnhancedLogicalPlan$ [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.ShuffleSorterIterator [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.HashComparator 
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
Error instrumenting class:org.apache.spark.deploy.worker.ui.WorkerWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.GeneratorState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseMatrixPickler
Error instrumenting class:org.apache.spark.metrics.MetricsSystem
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.xml.UDFXPathUtil.ReusableStringReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StringLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.ImplicitAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Utils.Lock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$RegisterDriverCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.TASK_END_REASON_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.FlatMapGroupsWithStateStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CompleteRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.ShippableVertexPartition.ShippableVertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator22$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.LimitMarker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByPercentileContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.JsonFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubqueryColumnAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BinaryType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.StoreVersion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockSort
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDA.LDAReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Log
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.RemoveAllHints
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSinkLog
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalNotContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.ClassInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.RandomVertexCut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.IntWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.OneForOneStreamManager.StreamState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupByType
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.LongWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingRelationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.ParserUtils.EnhancedLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierListContext
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueStore
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableIntArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SortPrefixUtils.NoOpPrefixComparator
Error instrumenting class:org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CheckForWorkerTimeOut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$MethodAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetLongDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.OutputCommitCoordinatorEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationListener
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
Error instrumenting class:org.apache.spark.metrics.sink.MetricsServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.SPARK_LISTENER_EVENT_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Message
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NumNonZeros
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.NullIntolerant
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidatorModel.CrossValidatorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.$ManualCloseBufferedOutputStream$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.IntAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.IntegerType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.QuasiNewton
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StringAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStore$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter
Error instrumenting class:org.apache.spark.sql.execution.streaming.HDFSMetadataLog
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator9$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleMethodContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BinaryAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecLocationContext
Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamMetadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.RightSide
Error instrumenting class:org.apache.spark.ui.ServerInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.Strings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.WindowsSubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.DiskMapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.GradientBoostedTreesModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.sql.execution.datasources.NoopCache$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.CholeskySolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.JoinSelection
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BucketSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DoubleLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator1$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.SerializationDebugger
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionCallContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugQuery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$Location
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.IntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableByteArray
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.FloatAccumulatorParam
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LateralViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryView
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedPrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ShortType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.$$typecreator3$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ClearCacheContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.StructTypePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoder.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.1
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter$
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition1D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.HistoryServerDiskManager.Lease
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Deprecated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRelations
[WARN] Unable to detect inner functions for
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropDatabaseContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Encoder [WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition2D [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TinyIntLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.$SortState [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ColumnarBatch.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetBinaryDictionaryAwareDecimalConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadChannel [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.BooleanConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink [WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.CountMinSketch.Version [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplitModel.TrainValidationSplitModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix Error instrumenting class:org.apache.spark.sql.catalyst.util.CompressionCodecs$ [WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskRunner [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.RLEIntIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableHeaderContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent [WARN] Unable to detect inner functions for 
class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowSpecContext [WARN] Unable to detect inner functions for class:org.apache.spark.io.ReadAheadInputStream.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.DoublePrefixComparator Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LoadDataContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.QuotedStyle [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Auto [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeNNZ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DateType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhenClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ChangeColumnContext [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.$OptionParser [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator25$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ManageResourceContext [WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationManagerSource [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.Metadata$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalFieldContext [WARN] Unable to detect inner 
functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticOperatorContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryBodyContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.DecryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslAttribute [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatSerdeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$SetAccumulator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinCriteriaContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionDefaultContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator19$1 Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$Metadata [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.1 [WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSourceLog [WARN] Unable to detect inner functions for class:org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.LongType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.AutoBatchedPickler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.$$typecreator3$1 Error instrumenting class:org.apache.spark.ui.ServerInfo$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperReader [WARN] 
Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator38$1 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason [WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSink$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMean [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.JOB_RESULT_FORMATTED_CLASS_NAMES Error instrumenting class:org.apache.spark.ui.WebUI [WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.KVStoreScalaSerializer [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimaryExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticUnaryContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckSuccess Error instrumenting class:org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog [WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexer.CategoryStats [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.ValuesReaderIntIterator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBucketContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SearchedCaseContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetConfigurationContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.RevokedLeadership [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColPositionContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats [WARN] Unable to detect inner functions for class:org.apache.spark.graphx.lib.SVDPlusPlus.Conf [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.$RetryingBlockFetchListener [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByRowsContext [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.$CloseAndFlushShieldOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.CatalystDataUpdater [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NonReservedContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.ElectedLeader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExistsContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlock [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.CommandBuilderUtils.JavaVendor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator2$1 Error instrumenting class:org.apache.spark.executor.ExecutorSource Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$ [WARN] Unable to detect inner functions for class:org.apache.spark.TestUtils.JavaSourceFromString [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedWriter Error instrumenting class:org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$ [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NullLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TruncateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.DoubleAccumulatorParam [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StarContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowPartitionsContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.AccumulatorParam.LongAccumulatorParam [WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Nominal$2$ [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.ReaderIterator [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.Builder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.RowPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayes.$$typecreator9$1 Error instrumenting class:org.apache.spark.input.StreamRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.RowComparator [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Decoder Error instrumenting class:org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.BasicOperators [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedLDAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsRequest [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.PushDownOperatorsToDataSource.FilterAndProject [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.AbstractLauncher.ArgumentValidator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SimpleCaseContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator11$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowFrame [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NestedConstantListContext Error instrumenting class:org.apache.spark.api.python.JavaToWritableConverter Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionDirectory$ [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager [WARN] 
Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryOrganizationContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkDirCleanup [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.$SpillableIterator [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator.$StreamBuffer [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.WriterThread [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence [WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.mesos.MesosExternalShuffleClient.$Heartbeater [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestReader [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ManagedBufferIterator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator7$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NNLSSolver [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.network.util.NettyUtils.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableLikeContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseVectorPickler [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryIterator [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ByteArrayConstructor [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinSide [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Encoder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.TableDesc [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.FileInputDStream.FileInputDStreamCheckpointData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SkewSpecContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanDefaultContext Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Projection [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.IntAccumulatorParam [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherBackend.BackendConnection [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetIntDictionaryAwareDecimalConverter Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$EdgeData [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BooleanType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintStatementContext [WARN] 
Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.JdbcRDD.ConnectionFactory [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierListContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.KVComparator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.annotation.InterfaceStability.Stable [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowOrder [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator1$1 Error instrumenting class:org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Solver [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyValueContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionSeqContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator11$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTableColumnsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleDataTypeContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildLeft [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Wildcard [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTablePropertiesContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator12$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionBase.$$typecreator1$1 [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WindowFrameCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAliases [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorNullsLast Error instrumenting class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex$ [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutput [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$ServerConnection [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Node [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslString Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore Error instrumenting class:org.apache.spark.api.python.TestOutputValueConverter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.RadixSortSupport [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator.AliasedGenerator$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator15$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementDefaultContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IndexToString.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGenerate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec.StateStoreUpdater [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial [WARN] Unable to detect inner 
functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.ExternalAppendOnlyUnsafeRowArrayIterator [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.StringToAttributeConversionHelper [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.PullOutNondeterministic [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinRelationContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FirstContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableLocationContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalBinaryContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ByteAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetStorageStatus [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStoreOps [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator7$1 [WARN] Unable to detect inner functions for class:org.apache.spark.AccumulatorParam.StringAccumulatorParam [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.EdgeData$ [WARN] 
Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseMatrixPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubscriptContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.api.r.SQLUtils.RegexContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.PythonWorkerFactory.MonitorThread [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.ManualCloseOutputStream Error instrumenting class:org.apache.spark.streaming.StreamingContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeWeightSum [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DecimalAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data$ Error instrumenting class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog$ Error instrumenting class:org.apache.spark.streaming.api.java.JavaStreamingContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DecimalConverter [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.DecryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Metric [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.NullOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse [WARN] 
Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.MutableStateArrays [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.PipedRDD.NotEqualsFileNameFilter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableNameContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.DumpByteCode [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTablePartitionsContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Variance [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaRDD.KafkaRDDIterator [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTempViewUsingContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadCallback [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StringConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator30$1 Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.CSVFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.util.Benchmark.Result [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$FieldAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Power [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator14$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.DummySerializerInstance.$1 Error instrumenting class:org.apache.spark.input.ConfigurableCombineFileRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Summary [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFileFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.JobScheduler.JobHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator38$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.HandleNullInputsForUDF [WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Message.Type [WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.VertexPartition.VertexPartitionOpsConstructor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteTaskResult [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$DownloadCallback [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.TypeAliases [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.ExecuteWriteTask [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMetric Error instrumenting class:org.apache.spark.sql.execution.streaming.SinkFileStatus$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StackCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.MonitorThread [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.GlobalAggregates Error instrumenting class:org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$ [WARN] Unable to detect inner functions for 
class:org.apache.spark.util.SizeEstimator.SearchState [WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.input [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.$ShuffleMetrics [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicate [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.RowUpdater [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.WriteJobDescription Error instrumenting class:org.apache.spark.status.api.v1.ApiRootResource$ [WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleRead [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowFrameContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault1Context [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator10$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.TimestampType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggregateFunctions [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault2Context [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReregisterWithMaster [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.TimestampAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.SortComparator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.ReceiverSupervisor.ReceiverState [WARN] Unable to detect inner functions 
for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTablePartitionContext [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoParams [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Stop [WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.BloomFilter.Version [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumberContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.MODE [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.KeyWrapper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LongConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterViewQueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFuncNameContext Error instrumenting class:org.apache.spark.SparkContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NormalEquation Error instrumenting class:org.apache.spark.sql.execution.datasources.CodecStreams$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLContext.implicits [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleStatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigestSerializer [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshResourceContext [WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.HashMapGrowthStrategy.Doubling [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator6$1 [WARN] Unable to detect 
inner functions for class:org.apache.spark.ml.tree.impl.RandomForest.NodeIndexInfo [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleFunctionIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.MetadataMismatchException [WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$MapIterator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.QuasiNewtonSolver.NormalEquationCostFun [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Mean [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTablesContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreHandler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.LongHasher [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestMasterState Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport Error instrumenting class:org.apache.spark.ui.SparkUI [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate.RichAttribute Error instrumenting class:org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.Data [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantListContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.$$typecreator1$1 [WARN] Unable to 
detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetAppId [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerMaster [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeL1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUpCast [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolvePivot [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.InMemoryBufferIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.FrequentItems.FreqItemCounter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.StringPrefixComparator [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GenericFileFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableSerDeContext [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.MyByteArrayOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowRefContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTblPropertiesContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ShortConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IfCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetStringConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.Event [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowth.FreqItemset [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication [WARN] Unable to detect inner functions for 
class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMin [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableApplicationInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedQueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsConflicted [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockTempFileManager.$ReferenceWithCleanup [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptedMessage [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.AppExecId [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator14$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PositionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IntConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DecimalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseVectorPickler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader.$$typecreator17$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Unresolved$2$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.Data$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigDecimalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.Dispatcher.MessageLoop [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma Error instrumenting class:org.apache.spark.input.WholeTextFileRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlockBuilder Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceCommitProtocol [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.StringToColumn [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatDelimitedContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetState [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator11$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.RandomForestModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.FloatAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.SpillableIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueType [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator15$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperReader [WARN] Unable to detect 
inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimitiveDataTypeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Fixed [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFunctionContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS.$$typecreator1$1 Error instrumenting class:org.apache.spark.sql.execution.datasources.text.TextFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCreateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator9$1 Created : .generated-mima-class-excludes in current directory. Created : .generated-mima-member-excludes in current directory. Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME. Note, this will be overridden by -java-home if it is set. [info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project [info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/) [info] Updating {file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/}tools... [info] spark-parent: previous-artifact not set, not analyzing binary compatibility [info] spark-tags: previous-artifact not set, not analyzing binary compatibility [info] Done updating. [info] spark-kvstore: previous-artifact not set, not analyzing binary compatibility [info] spark-tools: previous-artifact not set, not analyzing binary compatibility [info] spark-unsafe: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming-flume-sink: previous-artifact not set, not analyzing binary compatibility [info] spark-network-common: previous-artifact not set, not analyzing binary compatibility [info] spark-network-shuffle: previous-artifact not set, not analyzing binary compatibility [info] spark-network-yarn: previous-artifact not set, not analyzing binary compatibility [info] spark-launcher: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-launcher_2.11:2.3.0 (filtered 1) [info] spark-sketch: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sketch_2.11:2.3.0 (filtered 1) [info] spark-mllib-local: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib-local_2.11:2.3.0 (filtered 1) [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information. 
[warn] val sslContextFactory = new SslContextFactory() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2 [warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2 [warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false [warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead [warn] def attemptNumber(): Int = attemptId [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information. [warn] if (bootstrap != null && bootstrap.childGroup() != null) { [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] spark-ganglia-lgpl: previous-artifact not set, not analyzing binary compatibility [info] spark-kubernetes: previous-artifact not set, not analyzing binary compatibility [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] spark-yarn: previous-artifact not set, not analyzing binary compatibility [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] fwInfoBuilder.setRole(role) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. 
[warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] (RoleResourceInfo(resource.getRole, reservation), [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] spark-mesos: previous-artifact not set, not analyzing binary compatibility [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] spark-catalyst: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming_2.11:2.3.0 (filtered 3) [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createPollingStream( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] spark-streaming-flume: previous-artifact not set, not analyzing binary compatibility [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] val msgs = c.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. 
[warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] val p = consumer.poll(timeout) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] private val client = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getRecordsRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getShardIteratorRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. 
[warn] val kinesisClient = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpoint) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information. [warn] .withLongLivedCredentialsProvider(longLivedCreds.provider) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information. [warn] val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] dynamoDBClient.setRegion(RegionUtils.getRegion(regionName)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information. [warn] worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration) [warn] [warn] Multiple main classes detected. 
Run 'show discoveredMainClasses' to see the list [info] spark-streaming-kinesis-asl: previous-artifact not set, not analyzing binary compatibility [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())), [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.getFromOffsets( [warn] [warn] 
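Note on the kafka-0-8 deprecations in this stretch (they continue below): each one gives the same advice, "Update to Kafka 0.10 integration". A minimal sketch of that replacement path in spark-streaming-kafka-0-10, for reference; the broker address, topic, group id, and batch interval are placeholders, not values from this build:

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.kafka010.KafkaUtils
  import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
  import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
  import org.apache.kafka.common.serialization.StringDeserializer

  val ssc = new StreamingContext(new SparkConf().setAppName("kafka010-sketch"), Seconds(10))

  // All connection values here are illustrative placeholders.
  val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "localhost:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "group.id" -> "example-group",
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (false: java.lang.Boolean))

  // Direct stream: no receiver; offsets are tracked through the consumer group.
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc, PreferConsistent, Subscribe[String, String](Seq("topicA"), kafkaParams))

  stream.map(record => (record.key, record.value)).print()
  ssc.start()
  ssc.awaitTermination()

Unlike the 0.8 module, the 0.10 integration has no receiver-based createStream, so the direct stream above is the migration target for both deprecated entry points.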
/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] protected val kc = new KafkaCluster(kafkaParams) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ) extends RDD[R](sc, 
Nil) with Logging with HasOffsetRanges { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(kafkaParams) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [info] spark-streaming-kafka-0-8: previous-artifact not set, not analyzing binary compatibility [info] spark-graphx: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-graphx_2.11:2.3.0 (filtered 3) [info] spark-streaming-kafka-0-8-assembly: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming-kafka-0-10-assembly: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming-kinesis-asl-assembly: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming-flume-assembly: previous-artifact not set, not analyzing binary compatibility [info] spark-streaming-kafka-0-10: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.0 (filtered 6) [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information. [warn] && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information. [warn] new org.apache.parquet.hadoop.ParquetInputSplit( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information. [warn] ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information. 
[warn] ParquetFileReader.readFooter( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs) [warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs) [warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs) [warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs) [warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs) [warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock()) [warn] [info] spark-avro: previous-artifact not set, not analyzing binary compatibility [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. 
[warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] val p = consumer.poll(pollTimeoutMs) [warn] [info] spark-sql-kafka-0-10: previous-artifact not set, not analyzing binary compatibility [info] spark-core: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-core_2.11:2.3.0 (filtered 909) [info] spark-hive: previous-artifact not set, not analyzing binary compatibility [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0. [warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0. [warn] override def load(path: String): OneHotEncoder = super.load(path) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path [warn] if (addedClasspath != "") { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path [warn] settings.classpath append addedClasspath [warn] [info] spark-repl: previous-artifact not set, not analyzing binary compatibility [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] spark-hive-thriftserver: previous-artifact not set, not analyzing binary compatibility [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/spark-assembly_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. [info] spark-assembly: previous-artifact not set, not analyzing binary compatibility [info] Compiling 1 Scala source to /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/classes... [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/spark-examples_2.11-2.4.8-SNAPSHOT.jar ... [info] Done packaging. 
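Note on the mllib warnings just above: they name the 2.4-era replacement directly, OneHotEncoderEstimator (which the warning says becomes OneHotEncoder in 3.0.0). A minimal sketch of the non-deprecated path; the toy data and column names are illustrative only:

  import org.apache.spark.ml.feature.{OneHotEncoderEstimator, StringIndexer}
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("ohe-sketch").master("local[2]").getOrCreate()
  import spark.implicits._

  // Toy categorical column; index it first, as the encoder expects numeric indices.
  val df = Seq("a", "b", "a", "c").toDF("category")
  val indexed = new StringIndexer()
    .setInputCol("category").setOutputCol("categoryIndex")
    .fit(df).transform(df)

  // OneHotEncoderEstimator is the non-deprecated encoder in 2.4; note the
  // plural setInputCols/setOutputCols and the explicit fit step.
  val encoded = new OneHotEncoderEstimator()
    .setInputCols(Array("categoryIndex"))
    .setOutputCols(Array("categoryVec"))
    .fit(indexed)
    .transform(indexed)

  encoded.show()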
[info] spark-examples: previous-artifact not set, not analyzing binary compatibility
[info] spark-mllib: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:2.3.0 (filtered 514)
[info] spark-sql: found 0 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:2.3.0 (filtered 294)
[success] Total time: 28 s, completed Jan 17, 2021 5:19:00 PM
[info] Building Spark assembly (w/Hive 1.2.1) using SBT with these arguments: -Phadoop-2.6 -Pkubernetes -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Phive -Pmesos assembly/package
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information.
[warn] val sslContextFactory = new SslContextFactory()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2
[warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false
[warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally()
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead
[warn] def attemptNumber(): Int = attemptId
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information.
[warn] if (bootstrap != null && bootstrap.childGroup() != null) {
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] fwInfoBuilder.setRole(role)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] role.foreach { r => builder.setRole(r) }
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] Option(r.getRole), reservation)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] Option(r.getRole), reservation)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information.
[warn] role.foreach { r => builder.setRole(r) }
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information.
[warn] (RoleResourceInfo(resource.getRole, reservation),
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn] && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn] new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn] ParquetFileReader.readFooter(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] override def load(path: String): OneHotEncoder = super.load(path)
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] if (addedClasspath != "") {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn] settings.classpath append addedClasspath
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/assembly/target/scala-2.11/jars/spark-assembly_2.11-2.4.8-SNAPSHOT.jar ...
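Three of the warnings above name their replacements outright (Trigger.ProcessingTime, OneHotEncoderEstimator, and the non-deprecated Parquet footer API). The sketches below are illustrative migrations, not code from this build: the SparkSession setup, column names, and file path are assumptions, and the Parquet snippet assumes a parquet-mr release new enough to expose ParquetFileReader.open.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger
    import org.apache.spark.ml.feature.OneHotEncoderEstimator

    val spark = SparkSession.builder().master("local[*]").appName("migrations").getOrCreate()

    // Trigger.ProcessingTime replaces the deprecated ProcessingTime class; the
    // built-in "rate" source stands in for a real stream here.
    val query = spark.readStream.format("rate").load()
      .writeStream.format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()

    // OneHotEncoderEstimator is the Spark 2.3/2.4 replacement for the deprecated
    // OneHotEncoder transformer; the toy data and column names are made up.
    val indexed = spark.createDataFrame(Seq((0, 0.0), (1, 1.0), (2, 2.0))).toDF("id", "categoryIndex")
    val encoded = new OneHotEncoderEstimator()
      .setInputCols(Array("categoryIndex"))
      .setOutputCols(Array("categoryVec"))
      .fit(indexed)
      .transform(indexed)

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.parquet.hadoop.ParquetFileReader
    import org.apache.parquet.hadoop.util.HadoopInputFile

    // In newer parquet-mr releases, ParquetFileReader.open supersedes the
    // deprecated static readFooter overloads; the path is a placeholder.
    val reader = ParquetFileReader.open(
      HadoopInputFile.fromPath(new Path("/tmp/example.parquet"), new Configuration()))
    try println(reader.getFooter.getFileMetaData.getSchema)
    finally reader.close()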
[info] Done packaging.
[success] Total time: 15 s, completed Jan 17, 2021 5:19:25 PM
========================================================================
Running Java style checks
========================================================================
Checkstyle checks passed.
========================================================================
Running Spark unit tests
========================================================================
[info] Running Spark tests using SBT with these arguments: -Phadoop-2.6 -Pkubernetes -Pflume -Phive-thriftserver -Pyarn -Pkafka-0-8 -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pmesos test
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] Loading project definition from /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/project
[info] Set current project to spark-parent (in build file:/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/)
[info] ScalaTest
[info] ScalaTest
[info] ScalaTest
[info] Run completed in 114 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 120 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Run completed in 168 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ScalaTest
[info] Run completed in 22 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
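The AccumulableParam warnings emitted during compilation above say "use AccumulatorV2". A minimal sketch of that API follows; the set-of-strings accumulator itself is invented for illustration.

    import org.apache.spark.util.AccumulatorV2

    // Minimal AccumulatorV2 implementation: IN type String, OUT type the
    // accumulated Set[String]. The use case is hypothetical.
    class StringSetAccumulator extends AccumulatorV2[String, Set[String]] {
      private var items = Set.empty[String]
      override def isZero: Boolean = items.isEmpty
      override def copy(): StringSetAccumulator = {
        val acc = new StringSetAccumulator
        acc.items = items
        acc
      }
      override def reset(): Unit = items = Set.empty
      override def add(v: String): Unit = items += v
      override def merge(other: AccumulatorV2[String, Set[String]]): Unit =
        items ++= other.value
      override def value: Set[String] = items
    }

Registered on an existing SparkContext with sc.register(new StringSetAccumulator, "seen"), it is updated from tasks via add and read back on the driver via value, which is the contract the deprecated AccumulableParam trait used to provide.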
[info] BitArraySuite: [info] SparkSinkSuite: [info] Test run started [info] - error case when create BitArray (14 milliseconds) [info] - bitSize (3 milliseconds) [info] - set (2 milliseconds) [info] - normal operation (10 milliseconds) [info] - merge (11 milliseconds) [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithStart started [info] BloomFilterSuite: [info] - accuracy - Byte (8 milliseconds) [info] - mergeInPlace - Byte (5 milliseconds) [info] - accuracy - Short (6 milliseconds) [info] - mergeInPlace - Short (6 milliseconds) [info] UTF8StringPropertyCheckSuite: [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescending started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithLast started [info] - accuracy - Int (40 milliseconds) [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithSkip started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexWithMax started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescending started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescending started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndex started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexWithSkip started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexDescending started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.testRefWithIntNaturalKey started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescending started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithMax started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndex started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithSkip started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndexWithMax started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.refIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.childIndex started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithLast started [info] 
Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithSkip started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.copyIndex started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.naturalIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndexWithStart started [info] Test org.apache.spark.util.kvstore.InMemoryIteratorSuite.numericIndex started [info] Test run finished: 0 failed, 0 ignored, 38 total, 0.174s [info] Test run started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testDuplicateIndex started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testEmptyIndexName started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIndexAnnotation started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNumEncoding started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexMethod started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testKeyClashes started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testArrayIndices started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex2 started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testIllegalIndexName started [info] Test org.apache.spark.util.kvstore.LevelDBTypeInfoSuite.testNoNaturalIndex started [info] Test run finished: 0 failed, 0 ignored, 10 total, 0.014s [info] Test run started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithStart started [info] - toString (93 milliseconds) [info] - numChars (15 milliseconds) [info] - startsWith (14 milliseconds) [info] - mergeInPlace - Int (134 milliseconds) [info] - endsWith (9 milliseconds) [info] - toUpperCase (6 milliseconds) [info] - toLowerCase (4 milliseconds) [info] - compare (8 milliseconds) [info] - accuracy - Long (46 milliseconds) [info] - substring (50 milliseconds) [info] - contains (17 milliseconds) [info] - trim, trimLeft, trimRight (13 milliseconds) [info] - reverse (4 milliseconds) [info] - indexOf (18 milliseconds) [info] - repeat (8 milliseconds) [info] - lpad, rpad (4 milliseconds) [info] - mergeInPlace - Long (101 milliseconds) [info] - concat (57 milliseconds) [info] - concatWs (44 milliseconds) [info] - split !!! IGNORED !!! 
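The InMemoryIteratorSuite and LevelDB* runs above exercise Spark's internal org.apache.spark.util.kvstore module. A rough sketch of the annotation-driven API those suites test, with the Entry type invented here and no stability guarantees since the module is internal:

    import org.apache.spark.util.kvstore.{InMemoryStore, KVIndex}

    // A KVIndex-annotated getter marks the natural key that the iterator
    // suites sort and seek by; Entry is a made-up type.
    class Entry(val id: String) {
      @KVIndex def naturalKey: String = id
    }

    val store = new InMemoryStore()
    store.write(new Entry("a"))
    val back = store.read(classOf[Entry], "a")
    store.close()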
[info] - levenshteinDistance (7 milliseconds) [info] - hashCode (2 milliseconds) [info] - equals (1 millisecond) [info] Test run started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.addTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromStringTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.equalsTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromYearMonthStringTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.toStringTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.subtractTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromCaseInsensitiveStringTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromSingleUnitStringTest started [info] Test org.apache.spark.unsafe.types.CalendarIntervalSuite.fromDayTimeStringTest started [info] Test run finished: 0 failed, 0 ignored, 9 total, 0.013s [info] Test run started [info] Test org.apache.spark.unsafe.array.LongArraySuite.basicTest started [info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s [info] Test run started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOnHeapMemoryBlockResetsBaseObjectAndOffset started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.overlappingCopyMemory started [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/SSLOptions.scala:71: constructor SslContextFactory in class SslContextFactory is deprecated: see corresponding Javadoc for more information. [warn] val sslContextFactory = new SslContextFactory() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2 [warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/util/AccumulatorV2.scala:492: trait AccumulableParam in package spark is deprecated: use AccumulatorV2 [warn] param: org.apache.spark.AccumulableParam[R, T]) extends AccumulatorV2[T, R] { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/BarrierTaskContext.scala:161: method isRunningLocally in class TaskContext is deprecated: Local execution was removed, so this always returns false [warn] override def isRunningLocally(): Boolean = taskContext.isRunningLocally() [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/scheduler/StageInfo.scala:59: value attemptId in class StageInfo is deprecated: Use attemptNumber instead [warn] def attemptNumber(): Int = attemptId [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/src/main/scala/org/apache/spark/api/r/RBackend.scala:102: method childGroup in class ServerBootstrap is deprecated: see corresponding Javadoc for more information. [warn] if (bootstrap != null && bootstrap.childGroup() != null) { [warn] [warn] Multiple main classes detected. 
Run 'show discoveredMainClasses' to see the list [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithStart started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.memoryDebugFillEnabledInTest started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.offHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.heapMemoryReuse started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorPoolingReUsesLongArrays started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.onHeapMemoryAllocatorThrowsAssertionErrorOnDoubleFree started [info] Test org.apache.spark.unsafe.PlatformUtilSuite.freeingOffHeapMemoryBlockResetsOffset started [info] Test run finished: 0 failed, 0 ignored, 8 total, 0.06s [info] Test run started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownLongInputs started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownIntegerInputs started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTest started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescending started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.testKnownBytesInputs started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithStart started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestPaddedStrings started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithSkip started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexWithMax started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescending started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescending started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndex started [info] Test org.apache.spark.unsafe.hash.Murmur3_x86_32Suite.randomizedStressTestBytes started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexWithSkip started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexDescending started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.testRefWithIntNaturalKey started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescending started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithMax started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndex started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithLast started [info] Test 
org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithSkip started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndexWithMax started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.refIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.childIndex started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithSkip started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.copyIndex started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.naturalIndexDescendingWithLast started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndexWithStart started [info] Test org.apache.spark.util.kvstore.LevelDBIteratorSuite.numericIndex started [info] Test run finished: 0 failed, 0 ignored, 38 total, 0.756s [info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/core/target/scala-2.11/spark-core_2.11-2.4.8-SNAPSHOT.jar ... [info] Test run started [info] Test org.apache.spark.util.kvstore.ArrayWrappersSuite.testGenericArrayKey started [info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s [info] Test run started [info] Test run finished: 0 failed, 0 ignored, 6 total, 0.304s [info] Test org.apache.spark.util.kvstore.LevelDBBenchmark ignored [info] Test run finished: 0 failed, 1 ignored, 0 total, 0.0s [info] Test run started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testBasicIteration started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testObjectWriteReadDelete started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMultipleObjectWriteReadDelete started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testMetadata started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testArrayIndices started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testUpdate started [info] Test org.apache.spark.util.kvstore.InMemoryStoreSuite.testRemoveAll started [info] Test run finished: 0 failed, 0 ignored, 7 total, 0.015s [info] Test run started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleTypesWriteReadDelete started [info] Test run started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.titleCase started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatTest started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.soundex started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.basicTest started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamUnderflow started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToShort started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.startsWith started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.compareTo started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.levenshteinDistance started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamOverflow started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testObjectWriteReadDelete started [info] Test 
org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamIntArray started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.upperAndLower started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testSkip started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToInt started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.createBlankString started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.prefix started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.concatWsTest started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.repeat started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.contains started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.skipWrongFirstByte started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.emptyStringTest started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStreamSlice started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimBothWithTrimString started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substringSQL started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring_index started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.pad started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMultipleObjectWriteReadDelete started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.split started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trims started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimRightWithTrimString started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.findInSet started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.substring started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.translate started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.reverse started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.trimLeftWithTrimString started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.endsWith started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToByte started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.testToLong started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.writeToOutputStream started [info] Test org.apache.spark.unsafe.types.UTF8StringSuite.indexOf started [info] Test run finished: 0 failed, 0 ignored, 38 total, 0.045s [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testReopenAndVersionCheckDb started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testMetadata started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testUpdate started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testRemoveAll started [info] Test org.apache.spark.util.kvstore.LevelDBSuite.testNegativeIndexValues started [info] Test run finished: 0 failed, 0 ignored, 9 total, 0.319s [info] - Success with ack (1 second, 691 milliseconds) [info] Test run started [info] Test org.apache.spark.launcher.InProcessLauncherSuite.testKill started [info] Test org.apache.spark.launcher.InProcessLauncherSuite.testLauncher started [info] Test org.apache.spark.launcher.InProcessLauncherSuite.testErrorPropagation started [info] Test run finished: 0 failed, 0 ignored, 3 total, 0.158s [info] Test run started [info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testMissingArg started [info] TestingUtilsSuite: [info] - Comparing doubles using relative error. 
(39 milliseconds) [info] - Comparing doubles using absolute error. (5 milliseconds) [info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testAllOptions started [info] - Comparing vectors using relative error. (17 milliseconds) [info] - Comparing vectors using absolute error. (7 milliseconds) [info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testEqualSeparatedOption started [info] Test org.apache.spark.launcher.SparkSubmitOptionParserSuite.testExtraOptions started [info] Test run finished: 0 failed, 0 ignored, 4 total, 0.162s [info] Test run started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliParser started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkLauncher started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testAlternateSyntaxParsing started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunner started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testSparkRShell started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testMissingAppResource started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testShellCliParser started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testClusterCmdBuilder started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testDriverCmdBuilder started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoMainClass started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliKillAndStatus started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerNoArg started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testPySparkFallback started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testExamplesRunnerWithMasterNoMainClass started [info] Test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliHelpAndNoArg started [info] Test run finished: 0 failed, 0 ignored, 15 total, 0.066s [info] Test run started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testNoRedirectToLog started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithOutputRedirection started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectOutputToLog started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectsSimple started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorToLog started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testProcMonitorWithLogRedirection started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testFailedChildProc started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectErrorTwiceFails started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testBadLogRedirect started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectLastWins started [info] Test org.apache.spark.launcher.ChildProcAppHandleSuite.testRedirectToLog started [info] Test run finished: 0 failed, 0 ignored, 11 total, 0.128s [info] Test run started [info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testValidOptionStrings started [info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testJavaMajorVersion started [info] Test 
org.apache.spark.launcher.CommandBuilderUtilsSuite.testPythonArgQuoting started [info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testWindowsBatchQuoting started [info] Test org.apache.spark.launcher.CommandBuilderUtilsSuite.testInvalidOptionStrings started [info] Test run finished: 0 failed, 0 ignored, 5 total, 0.004s [info] Test run started [info] Test org.apache.spark.launcher.LauncherServerSuite.testTimeout started [info] Test org.apache.spark.launcher.LauncherServerSuite.testStreamFiltering started [info] Done packaging. [info] - Comparing Matrices using absolute error. (289 milliseconds) [info] - Comparing Matrices using relative error. (10 milliseconds) [info] Test org.apache.spark.launcher.LauncherServerSuite.testSparkSubmitVmShutsDown started [info] Test org.apache.spark.launcher.LauncherServerSuite.testLauncherServerReuse started [info] UtilsSuite: [info] - EPSILON (3 milliseconds) [info] Test org.apache.spark.launcher.LauncherServerSuite.testAppHandleDisconnect started [info] Test org.apache.spark.launcher.LauncherServerSuite.testCommunication started [info] MatricesSuite: [info] - dense matrix construction (0 milliseconds) [info] - dense matrix construction with wrong dimension (1 millisecond) [info] Test run finished: 0 failed, 0 ignored, 6 total, 0.14s [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] - sparse matrix construction (195 milliseconds) [info] - sparse matrix construction with wrong number of elements (2 milliseconds) [info] - index in matrices incorrect input (3 milliseconds) [info] - equals (16 milliseconds) [info] - matrix copies are deep copies (0 milliseconds) [info] - matrix indexing and updating (1 millisecond) [info] - dense to dense (2 milliseconds) [info] - dense to sparse (2 milliseconds) [info] - sparse to sparse (4 milliseconds) [info] - sparse to dense (2 milliseconds) [info] - compressed dense (4 milliseconds) [info] - compressed sparse (2 milliseconds) [info] - map, update (2 milliseconds) [info] - transpose (1 millisecond) [info] - foreachActive (1 millisecond) [info] - horzcat, vertcat, eye, speye (13 milliseconds) [info] - zeros (1 millisecond) [info] - ones (2 milliseconds) [info] - eye (1 millisecond) [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:87: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] fwInfoBuilder.setRole(role) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:224: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:260: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:263: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. 
[warn] Option(r.getRole), reservation) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:521: method setRole in class Builder is deprecated: see corresponding Javadoc for more information. [warn] role.foreach { r => builder.setRole(r) } [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala:537: method getRole in class Resource is deprecated: see corresponding Javadoc for more information. [warn] (RoleResourceInfo(resource.getRole, reservation), [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] - Failure with nack (1 second, 147 milliseconds) [info] - rand (142 milliseconds) [info] - randn (2 milliseconds) [info] - diag (1 millisecond) [info] - sprand (9 milliseconds) [info] - sprandn (2 milliseconds) [info] - toString (13 milliseconds) [info] - numNonzeros and numActives (1 millisecond) [info] - fromBreeze with sparse matrix (15 milliseconds) Jan 17, 2021 5:19:52 PM com.github.fommil.netlib.BLAS <clinit> WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS Jan 17, 2021 5:19:52 PM com.github.fommil.netlib.BLAS <clinit> WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS [info] - row/col iterator (45 milliseconds) [info] BreezeMatrixConversionSuite: [info] - dense matrix to breeze (0 milliseconds) [info] - dense breeze matrix to matrix (1 millisecond) [info] - sparse matrix to breeze (1 millisecond) [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] - sparse breeze matrix to sparse matrix (1 millisecond) [info] MultivariateGaussianSuite: [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:259: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createStream(jssc, hostname, port, storageLevel, enableDecompression) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala:275: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val dstream = FlumeUtils.createPollingStream( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumePollingEventCount.scala:56: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createPollingStream(ssc, host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/flume/src/main/scala/org/apache/spark/examples/FlumeEventCount.scala:59: object FlumeUtils in package flume is deprecated: Deprecated without replacement [warn] val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2) [warn] [warn] Multiple main classes detected. 
Run 'show discoveredMainClasses' to see the list [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala:172: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information. [warn] val msgs = c.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:100: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/ConsumerStrategy.scala:153: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] consumer.poll(0) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaDataConsumer.scala:200: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information. [warn] val p = consumer.poll(timeout) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:633: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:647: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:648: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[(Array[Byte], Array[Byte])] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:657: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:658: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker]): JavaRDD[Array[Byte]] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:670: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges: JList[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:671: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] leaders: JMap[TopicAndPartition, Broker], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:673: 
object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createRDD[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:676: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] offsetRanges.toArray(new Array[OffsetRange](offsetRanges.size())), [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:720: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(Map(kafkaParams.asScala.toSeq: _*)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:721: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.getFromOffsets( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:725: object KafkaUtils in package kafka is deprecated: Update to Kafka 0.10 integration [warn] KafkaUtils.createDirectStream[Array[Byte], Array[Byte], DefaultDecoder, DefaultDecoder, V]( [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:733: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ): OffsetRange = OffsetRange.create(topic, partition, fromOffset, untilOffset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: class Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:738: object Broker in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def createBroker(host: String, port: JInt): Broker = Broker(host, port) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaUtils.scala:740: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] def offsetRangesOfKafkaRDD(rdd: RDD[_]): JList[OffsetRange] = { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:89: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] protected val kc = new KafkaCluster(kafkaParams) [warn] [warn] 
/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:172: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:217: object KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val leaders = KafkaCluster.checkErrors(kc.findLeaders(topics)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/DirectKafkaInputDStream.scala:222: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] context.sparkContext, kafkaParams, b.map(OffsetRange(_)), leaders, messageHandler) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:58: trait HasOffsetRanges in package kafka is deprecated: Update to Kafka 0.10 integration [warn] ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges { [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:55: class OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val offsetRanges: Array[OffsetRange], [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:152: class KafkaCluster in package kafka is deprecated: Update to Kafka 0.10 integration [warn] val kc = new KafkaCluster(kafkaParams) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-8/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDD.scala:268: object OffsetRange in package kafka is deprecated: Update to Kafka 0.10 integration [warn] OffsetRange(tp.topic, tp.partition, fo, uo.offset) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:140: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] private val client = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:150: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:219: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. 
[warn] getRecordsRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala:240: method setRequestCredentials in class AmazonWebServiceRequest is deprecated: see corresponding Javadoc for more information. [warn] getShardIteratorRequest.setRequestCredentials(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:606: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc.ssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:613: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisUtils.scala:616: method createStream in object KinesisUtils is deprecated: Use KinesisInputDStream.builder instead [warn] KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName, [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:107: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(credentials) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:108: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:223: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val kinesisClient = new AmazonKinesisClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/examples/streaming/KinesisWordCountASL.scala:224: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] kinesisClient.setEndpoint(endpoint) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SparkAWSCredentials.scala:76: method withLongLivedCredentialsProvider in class Builder is deprecated: see corresponding Javadoc for more information. 
[warn] .withLongLivedCredentialsProvider(longLivedCreds.provider) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:58: constructor AmazonKinesisClient in class AmazonKinesisClient is deprecated: see corresponding Javadoc for more information. [warn] val client = new AmazonKinesisClient(KinesisTestUtils.getAWSCredentials()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:59: method setEndpoint in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] client.setEndpoint(endpointUrl) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:64: constructor AmazonDynamoDBClient in class AmazonDynamoDBClient is deprecated: see corresponding Javadoc for more information. [warn] val dynamoDBClient = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain()) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisTestUtils.scala:65: method setRegion in class AmazonWebServiceClient is deprecated: see corresponding Javadoc for more information. [warn] dynamoDBClient.setRegion(RegionUtils.getRegion(regionName)) [warn] [warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala:187: constructor Worker in class Worker is deprecated: see corresponding Javadoc for more information. [warn] worker = new Worker(recordProcessorFactory, kinesisClientLibConfiguration) [warn] [warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list [info] - accuracy - String (2 seconds, 955 milliseconds) Jan 17, 2021 5:19:53 PM com.github.fommil.netlib.LAPACK <clinit> WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK Jan 17, 2021 5:19:53 PM com.github.fommil.netlib.LAPACK <clinit> WARNING: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK [info] ScalaTest [info] Run completed in 18 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. [info] Test run started [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadChallenge started [info] ScalaTest [info] Run completed in 14 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. [info] ScalaTest [info] Run completed in 13 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. [info] ScalaTest [info] Run completed in 17 milliseconds. [info] Total number of tests run: 0 [info] Suites: completed 0, aborted 0 [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0 [info] No tests were executed. 
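Both deprecation families in the warnings above state their migration targets: the kafka-0-8 warnings say "Update to Kafka 0.10 integration", and the KinesisUtils warnings say "Use KinesisInputDStream.builder instead". Hedged sketches of each, assuming an existing StreamingContext ssc; the broker address, group id, topic, stream name, app name, region, and endpoint are placeholders:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.Seconds
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kinesis.{KinesisInitialPositions, KinesisInputDStream}

    // Kafka 0.10 direct stream, replacing the 0.8-era KafkaUtils overloads
    // flagged above.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",
      "enable.auto.commit" -> (false: java.lang.Boolean))
    val kafkaStream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

    // Kinesis builder API, replacing the deprecated KinesisUtils.createStream.
    val kinesisStream = KinesisInputDStream.builder
      .streamingContext(ssc)
      .streamName("example-stream")
      .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
      .regionName("us-west-2")
      .initialPosition(new KinesisInitialPositions.Latest)
      .checkpointAppName("example-app")
      .checkpointInterval(Seconds(10))
      .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
      .build()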
[info] - univariate (427 milliseconds) [info] - multivariate (8 milliseconds) [info] - multivariate degenerate (0 milliseconds) [info] - SPARK-11302 (5 milliseconds) [info] BreezeVectorConversionSuite: [info] - dense to breeze (1 millisecond) [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongAppId started [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testWrongNonce started [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testMismatchedSecret started [info] - sparse to breeze (110 milliseconds) [info] - dense breeze to vector (1 millisecond) [info] - sparse breeze to vector (0 milliseconds) [info] - sparse breeze with partially-used arrays to vector (0 milliseconds) [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessage started [info] VectorsSuite: [info] - dense vector construction with varargs (0 milliseconds) [info] - dense vector construction from a double array (0 milliseconds) [info] - sparse vector construction (1 millisecond) [info] - sparse vector construction with unordered elements (2 milliseconds) [info] - sparse vector construction with mismatched indices/values array (1 millisecond) [info] - sparse vector construction with too many indices vs size (1 millisecond) [info] - sparse vector construction with negative indices (0 milliseconds) [info] - dense to array (0 milliseconds) [info] - dense argmax (1 millisecond) [info] - sparse to array (0 milliseconds) [info] - sparse argmax (0 milliseconds) [info] - vector equals (2 milliseconds) [info] - vectors equals with explicit 0 (1 millisecond) [info] - indexing dense vectors (1 millisecond) [info] - indexing sparse vectors (0 milliseconds) [info] - zeros (0 milliseconds) [info] - Vector.copy (1 millisecond) [info] - fromBreeze (1 millisecond) [info] - sqdist (49 milliseconds) [info] - foreachActive (3 milliseconds) [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testEncryptedMessageWhenTransferringZeroBytes started [info] - vector p-norm (5 milliseconds) [info] - Vector numActive and numNonzeros (2 milliseconds) [info] - Vector toSparse and toDense (1 millisecond) [info] - Vector.compressed (1 millisecond) [info] - SparseVector.slice (1 millisecond) [info] - sparse vector only support non-negative length (1 millisecond) [info] BLASSuite: [info] - copy (4 milliseconds) [info] - scal (0 milliseconds) [info] - axpy (1 millisecond) [info] - dot (1 millisecond) [info] - spr (2 milliseconds) [info] - syr (4 milliseconds) [info] - gemm (3 milliseconds) [info] - gemv (3 milliseconds) [info] - spmv (2 milliseconds) [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testAuthEngine started [info] Test org.apache.spark.network.crypto.AuthEngineSuite.testBadKeySize started [info] Test run finished: 0 failed, 0 ignored, 8 total, 0.442s [info] Test run started [info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.streamStatesAreFreedWhenConnectionIsClosedEvenIfBufferIteratorThrowsException started [info] Test org.apache.spark.network.server.OneForOneStreamManagerSuite.managedBuffersAreFeedWhenConnectionIsClosed started [info] Test run finished: 0 failed, 0 ignored, 2 total, 0.062s [info] Test run started [info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay started [info] - Failure with timeout (1 second, 120 milliseconds) [info] Test run started [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchUnregisteredExecutor started [info] Test 
org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongExecutor started [info] - Multiple consumers (1 second, 511 milliseconds) [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNoServer started [info] - mergeInPlace - String (2 seconds, 217 milliseconds) [info] - incompatible merge (2 milliseconds) [info] CountMinSketchSuite: [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testRegisterInvalidExecutor started [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchThreeSort started [info] - accuracy - Byte (261 milliseconds) [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchWrongBlockId started [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchNonexistent started [info] - mergeInPlace - Byte (241 milliseconds) [info] Test org.apache.spark.network.shuffle.ExternalShuffleIntegrationSuite.testFetchOneSort started [info] Test run finished: 0 failed, 0 ignored, 8 total, 1.703s [info] Test run started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testRetryAndUnrecoverable started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnFirst started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testUnrecoverableFailure started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testSingleIOExceptionOnSecond started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testThreeIOExceptions started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testNoFailures started [info] Test org.apache.spark.network.shuffle.RetryingBlockFetcherSuite.testTwoIOExceptions started [info] Test run finished: 0 failed, 0 ignored, 7 total, 0.205s [info] Test run started [info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadSecret started [info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testBadAppId started [info] - accuracy - Short (586 milliseconds) [info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testValid started [info] Test org.apache.spark.network.shuffle.ExternalShuffleSecuritySuite.testEncryption started [info] - Multiple consumers with some failures (1 second, 400 milliseconds) [info] - mergeInPlace - Short (229 milliseconds) [info] Test run finished: 0 failed, 0 ignored, 4 total, 0.435s [info] Test run started [info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testGoodClient started [info] ScalaTest [info] Run completed in 8 seconds, 65 milliseconds. [info] Total number of tests run: 19 [info] Suites: completed 1, aborted 0 [info] Tests: succeeded 19, failed 0, canceled 0, ignored 1, pending 0 [info] All tests passed. 
[info] Passed: Total 81, Failed 0, Errors 0, Passed 81, Ignored 1
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslClient started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testNoSaslServer started
[info] Test org.apache.spark.network.sasl.SaslIntegrationSuite.testBadClient started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.384s
[info] Test run started
[info] Test org.apache.spark.network.sasl.ShuffleSecretManagerSuite.testMultipleRegisters started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] - accuracy - Int (463 milliseconds)
[info] Test run started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnRemovedExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRemovedExecutorWithShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupOnlyRegisteredExecutorWithoutShuffleFiles started
[info] Test org.apache.spark.network.shuffle.NonShuffleFilesCleanupSuite.cleanupUsesExecutorWithoutShuffleFiles started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.1s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testSaslAppIsolation started
[info] - mergeInPlace - Int (199 milliseconds)
[info] Test org.apache.spark.network.shuffle.AppIsolationSuite.testAuthEngineAppIsolation started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.378s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testNormalizeAndInternPathname started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testSortShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.testBadRequests started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockResolverSuite.jsonSerializationOfExecutorRegistration started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.214s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testOpenShuffleBlocks started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testRegisterExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleBlockHandlerSuite.testBadMessages started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.023s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testEmptyBlockFetch started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailure started
[info] - accuracy - Long (535 milliseconds)
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFailureAndSuccess started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchThree started
[info] Test org.apache.spark.network.shuffle.OneForOneBlockFetcherSuite.testFetchOne started
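The CountMinSketchSuite lines interleaved above ("accuracy - …", "mergeInPlace - …", "incompatible merge") test the public sketch API in the spark-sketch module. A small usage sketch, assuming spark-sketch (bundled with spark-core) is on the classpath:

    import org.apache.spark.util.sketch.CountMinSketch

    // Sketches must share depth/width/seed to be mergeable; the
    // "incompatible merge" tests assert that mismatched sketches throw.
    val s1 = CountMinSketch.create(0.001, 0.99, 42) // eps, confidence, seed
    val s2 = CountMinSketch.create(0.001, 0.99, 42)

    (1 to 1000).foreach(i => s1.add(i % 10))
    (1 to 1000).foreach(i => s2.add(i % 7))

    s1.mergeInPlace(s2)                   // "mergeInPlace - Int" etc.
    println(s1.estimateCount(3))          // may overestimate, never underestimates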
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.009s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.BlockTransferMessagesSuite.serializeOpenShuffleBlocks started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupOnlyRemovedApp started
[info] JdbcRDDSuite:
[info] - mergeInPlace - Long (225 milliseconds)
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupUsesExecutor started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.noCleanupAndCleanup started
[info] Test org.apache.spark.network.shuffle.ExternalShuffleCleanupSuite.cleanupMultipleExecutors started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.998s
[info] - accuracy - String (1 second, 976 milliseconds)
[info] - basic functionality (2 seconds, 417 milliseconds)
[info] DistributedSuite:
[info] - mergeInPlace - String (2 seconds, 14 milliseconds)
[info] - large id overflow (481 milliseconds)
[info] SparkUncaughtExceptionHandlerSuite:
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:128: value ENABLE_JOB_SUMMARY in object ParquetOutputFormat is deprecated: see corresponding Javadoc for more information.
[warn]       && conf.get(ParquetOutputFormat.ENABLE_JOB_SUMMARY) == null) {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:360: class ParquetInputSplit in package hadoop is deprecated: see corresponding Javadoc for more information.
[warn]       new org.apache.parquet.hadoop.ParquetInputSplit(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:371: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readFooter(sharedConf, filePath, SKIP_ROW_GROUPS).getFileMetaData
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:544: method readFooter in object ParquetFileReader is deprecated: see corresponding Javadoc for more information.
[warn]       ParquetFileReader.readFooter(
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala:46: class ProcessingTime in package streaming is deprecated: use Trigger.ProcessingTime(intervalMs)
[warn] case class ProcessingTimeExecutor(processingTime: ProcessingTime, clock: Clock = new SystemClock())
[warn]
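The ProcessingTime deprecation above names its replacement directly: Trigger.ProcessingTime. A hedged migration sketch for user code; `df` is a hypothetical streaming DataFrame, not something from this build:

    import org.apache.spark.sql.streaming.Trigger

    // Before (deprecated since 2.2):
    //   df.writeStream.trigger(ProcessingTime(1000)).start()
    // After:
    df.writeStream
      .trigger(Trigger.ProcessingTime(1000))  // interval in milliseconds
      .format("console")
      .start()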
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:119: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:139: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:187: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:217: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala:293: method poll in trait Consumer is deprecated: see corresponding Javadoc for more information.
[warn]     consumer.poll(0)
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala:470: method poll in class KafkaConsumer is deprecated: see corresponding Javadoc for more information.
[warn]     val p = consumer.poll(pollTimeoutMs)
[warn]
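The repeated consumer.poll(0) warnings come from the Kafka clients API, where poll(long) is deprecated in favor of poll(java.time.Duration), which also bounds the time spent on metadata fetches. A sketch of the replacement call, assuming a kafka-clients version with the Duration overload (2.0+); `consumer` here is a hypothetical KafkaConsumer[String, String]:

    import java.time.Duration

    // Deprecated form; with 0 it could block indefinitely on metadata:
    //   val records = consumer.poll(0)
    // Replacement:
    val records = consumer.poll(Duration.ofMillis(0))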
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = true (1 second, 378 milliseconds)
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:138: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn] object OneHotEncoder extends DefaultParamsReadable[OneHotEncoder] {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoder.scala:141: class OneHotEncoder in package feature is deprecated: `OneHotEncoderEstimator` will be renamed `OneHotEncoder` and this `OneHotEncoder` will be removed in 3.0.0.
[warn]   override def load(path: String): OneHotEncoder = super.load(path)
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:53: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]     if (addedClasspath != "") {
[warn]
[warn] /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala:54: method addedClasspath in class ILoop is deprecated: Use reset, replay or require to update class path
[warn]       settings.classpath append addedClasspath
[warn]
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutCleanlyClosesClient started
[info] - SPARK-30310: Test uncaught RuntimeException, exitOnUncaughtException = false (1 second, 990 milliseconds)
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Packaging /home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.8-SNAPSHOT.jar ...
[info] Done packaging.
[info] ScalaTest
[info] Run completed in 15 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] - task throws not serializable exception (5 seconds, 978 milliseconds)
[info] - local-cluster format (4 milliseconds)
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = true (1 second, 696 milliseconds)
[info] - accuracy - Byte array (6 seconds, 47 milliseconds)
[info] - SPARK-30310: Test uncaught OutOfMemoryError, exitOnUncaughtException = false (1 second, 355 milliseconds)
[info] - mergeInPlace - Byte array (1 second, 959 milliseconds)
[info] - incompatible merge (1 millisecond)
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = true (1 second, 265 milliseconds)
[info] FlumePollingStreamSuite:
[info] - simple groupByKey (3 seconds, 758 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(RuntimeException), exitOnUncaughtException = false (1 second, 249 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = true (1 second, 328 milliseconds)
[info] - SPARK-30310: Test uncaught SparkFatalException(OutOfMemoryError), exitOnUncaughtException = false (1 second, 303 milliseconds)
[info] PipedRDDSuite:
[info] - basic pipe (111 milliseconds)
[info] - basic pipe with tokenization (90 milliseconds)
[info] - failure in iterating over pipe input (82 milliseconds)
[info] - stdin writer thread should be exited when task is finished (56 milliseconds)
[info] - groupByKey where map output sizes exceed maxMbInFlight (3 seconds, 906 milliseconds)
[info] Test org.apache.spark.network.RequestTimeoutIntegrationSuite.timeoutInactiveRequests started
[info] - advanced pipe (708 milliseconds)
[info] - pipe with empty partition (117 milliseconds)
[info] - pipe with env variable (40 milliseconds)
[info] - pipe with process which cannot be launched due to bad command (30 milliseconds)
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
[info] - pipe with process which is launched but fails with non-zero exit status (35 milliseconds)
[info] - basic pipe with separate working directory (119 milliseconds)
[info] - test pipe exports map_input_file (73 milliseconds)
[info] - test pipe exports mapreduce_map_input_file (30 milliseconds)
[info] AccumulatorV2Suite:
[info] - LongAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - DoubleAccumulator add/avg/sum/count/isZero (1 millisecond)
[info] - ListAccumulator (1 millisecond)
[info] - LegacyAccumulatorWrapper (1 millisecond)
[info] - LegacyAccumulatorWrapper with AccumulatorParam that has no equals/hashCode (3 milliseconds)
[info] FileSuite:
[info] - text files (429 milliseconds)
[info] - text files (compressed) (538 milliseconds)
[info] - SequenceFiles (313 milliseconds)
[info] - SequenceFile (compressed) (393 milliseconds)
[info] - SequenceFile with writable key (237 milliseconds)
[info] - SequenceFile with writable value (237 milliseconds)
[info] - SequenceFile with writable key and value (226 milliseconds)
[info] - accumulators (3 seconds, 240 milliseconds)
[info] - implicit conversions in reading SequenceFiles (345 milliseconds)
[info] - object files of ints (218 milliseconds)
[info] - object files of complex types (235 milliseconds)
[info] - object files of classes from a JAR (1 second, 31 milliseconds)
[info] - write SequenceFile using new Hadoop API (231 milliseconds)
[info] - read SequenceFile using new Hadoop API (267 milliseconds)
[info] - binary file input as byte array (190 milliseconds)
[info] - portabledatastream caching tests (180 milliseconds)
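AccumulatorV2Suite above covers the driver-side arithmetic of the built-in accumulators. A minimal sketch of the public API those test names map to; `sc` is a hypothetical SparkContext, not part of this build's output:

    // Register a named LongAccumulator with the context.
    val acc = sc.longAccumulator("records")

    sc.parallelize(1 to 100).foreach(_ => acc.add(1L))

    // Driver-side views tested as "add/avg/sum/count/isZero" above.
    assert(acc.sum == 100L && acc.count == 100L && acc.avg == 1.0 && !acc.isZero)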
[info] - portabledatastream persist disk storage (211 milliseconds)
[info] - broadcast variables (3 seconds, 72 milliseconds)
[info] - portabledatastream flatmap tests (153 milliseconds)
[info] - SPARK-22357 test binaryFiles minPartitions (443 milliseconds)
[info] - minimum split size per node and per rack should be less than or equal to maxSplitSize (126 milliseconds)
[info] - fixed record length binary file as byte array (128 milliseconds)
[info] - negative binary record length should raise an exception (110 milliseconds)
[info] - file caching (140 milliseconds)
[info] - prevent user from overwriting the empty directory (old Hadoop API) (81 milliseconds)
[info] - prevent user from overwriting the non-empty directory (old Hadoop API) (145 milliseconds)
[info] - allow user to disable the output directory existence checking (old Hadoop API) (205 milliseconds)
[info] - prevent user from overwriting the empty directory (new Hadoop API) (67 milliseconds)
[info] - prevent user from overwriting the non-empty directory (new Hadoop API) (169 milliseconds)
[info] - allow user to disable the output directory existence checking (new Hadoop API (205 milliseconds)
[info] - save Hadoop Dataset through old Hadoop API (125 milliseconds)
[info] - save Hadoop Dataset through new Hadoop API (136 milliseconds)
[info] - Get input files via old Hadoop API (216 milliseconds)
[info] - Get input files via new Hadoop API (221 milliseconds)
[info] - spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD (313 milliseconds)
[info] - repeatedly failing task (3 seconds, 358 milliseconds)
[info] - spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API) (516 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 3 total, 31.875s
[info] Test run started
[info] Test org.apache.spark.network.ProtocolSuite.responses started
[info] Test org.apache.spark.network.ProtocolSuite.requests started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.029s
[info] Test run started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariable started
[info] Test org.apache.spark.network.TransportClientFactorySuite.reuseClientsUpToConfigVariableConcurrent started
[info] - spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API) (445 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeFactoryBeforeCreateClient started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeBlockClientsWithFactory started
[info] - spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD (417 milliseconds)
[info] LogPageSuite:
[info] Test org.apache.spark.network.TransportClientFactorySuite.neverReturnInactiveClients started
[info] Test org.apache.spark.network.TransportClientFactorySuite.closeIdleConnectionForRequestTimeOut started
[info] - flume polling test (13 seconds, 842 milliseconds)
[info] - get logs simple (237 milliseconds)
[info] PartiallyUnrolledIteratorSuite:
[info] - join two iterators (47 milliseconds)
[info] HistoryServerDiskManagerSuite:
[info] - leasing space (130 milliseconds)
[info] - tracking active stores (23 milliseconds)
[info] - approximate size heuristic (1 millisecond)
[info] - SPARK-32024: update ApplicationStoreInfo.size during initializing (32 milliseconds)
[info] ExternalShuffleServiceSuite:
[info] - groupByKey without compression (247 milliseconds)
[info] Test org.apache.spark.network.TransportClientFactorySuite.returnDifferentClientsForDifferentServers started
[info] Test run finished: 0 failed, 0 ignored, 7 total, 2.324s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslClientFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testSaslServerFallback started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthReplay started
[info] - shuffle non-zero block size (3 seconds, 945 milliseconds)
[info] - repeatedly failing task that crashes JVM (7 seconds, 150 milliseconds)
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testNewAuth started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testLargeMessageEncryption started
[info] Test org.apache.spark.network.crypto.AuthIntegrationSuite.testAuthFailure started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 5.662s
[info] Test run started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamConcurrently started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendOneWayMessage started
[info] Test org.apache.spark.network.RpcIntegrationSuite.singleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.throwErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleTrouble started
[info] Test org.apache.spark.network.RpcIntegrationSuite.doubleRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.returnErrorRPC started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamFailures started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendRpcWithStreamOneAtATime started
[info] Test org.apache.spark.network.RpcIntegrationSuite.sendSuccessAndFailure started
[info] Test run finished: 0 failed, 0 ignored, 10 total, 0.907s
[info] Test run started
[info] Test org.apache.spark.network.crypto.TransportCipherSuite.testBufferNotLeaksOnInternalError started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.014s
[info] Test run started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnException started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.failOutstandingStreamCallbackOnClose started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.testActiveStreams started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleFailedFetch started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.handleSuccessfulRPC started
[info] Test org.apache.spark.network.TransportResponseHandlerSuite.clearAllOutstandingRequests started
[info] Test run finished: 0 failed, 0 ignored, 8 total, 0.022s
[info] Test run started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testEmptyFrame started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testNegativeFrameSize started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testSplitLengthField started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testFrameDecoding started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testInterception started
[info] Test org.apache.spark.network.util.TransportFrameDecoderSuite.testRetainedFrames started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.086s
[info] Test run started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testDeallocateReleasesManagedBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testByteBufBody started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testShortWrite started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodySingleBuffer started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testCompositeByteBufBodyMultipleBuffers started
[info] Test org.apache.spark.network.protocol.MessageWithHeaderSuite.testSingleWrite started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.003s
[info] Test run started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testNonMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslAuthentication started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessageChunking started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testServerAlwaysEncrypt started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDataEncryptionIsActuallyEnabled started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testFileRegionEncryption started
[info] - shuffle serializer (3 seconds, 566 milliseconds)
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testSaslEncryption started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testDelegates started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testEncryptedMessage started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testMatching started
[info] Test org.apache.spark.network.sasl.SparkSaslSuite.testRpcHandlerDelegate started
[info] Test run finished: 0 failed, 0 ignored, 11 total, 0.505s
[info] Test run started
[info] Test org.apache.spark.network.util.CryptoUtilsSuite.testConfConversion started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.0s
[info] Test run started
[info] Test org.apache.spark.network.TransportRequestHandlerSuite.handleFetchRequestAndStreamRequest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.017s
[info] Test run started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testServerResponse started
[info] Test org.apache.spark.network.crypto.AuthMessagesSuite.testClientChallenge started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testGeneralNettyMemoryMetrics started
[info] Test org.apache.spark.network.util.NettyMemoryMetricsSuite.testAdditionalMetrics started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.129s
[info] Test run started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchNonExistentChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchFileChunk started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBothChunks started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchChunkAndNonExistent started
[info] Test org.apache.spark.network.ChunkFetchIntegrationSuite.fetchBufferChunk started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.441s
[info] Test run started
[info] Test org.apache.spark.network.StreamSuite.testSingleStream started
[info] Test org.apache.spark.network.StreamSuite.testMultipleStreams started
[info] Test org.apache.spark.network.StreamSuite.testConcurrentStreams started
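SparkSaslSuite and the crypto suites above exercise the two RPC protection paths (legacy SASL and the AES-based AuthEngine). The user-facing switches are plain configuration keys; a hedged sketch of enabling them, using the documented spark.authenticate / spark.network.crypto settings:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.authenticate", "true")             // shared-secret authentication
      .set("spark.network.crypto.enabled", "true")   // AES-based encryption (AuthEngine path)
      // Legacy alternative covered by the SASL suites:
      // .set("spark.authenticate.enableSaslEncryption", "true")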
[info] Test org.apache.spark.network.StreamSuite.testZeroLengthStream started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.292s
[info] MesosSchedulerUtilsSuite:
[info] - use at-least minimum overhead (332 milliseconds)
[info] - use overhead if it is greater than minimum value (7 milliseconds)
[info] - use spark.mesos.executor.memoryOverhead (if set) (3 milliseconds)
[info] - parse a non-empty constraint string correctly (28 milliseconds)
[info] - parse an empty constraint string correctly (1 millisecond)
[info] - throw an exception when the input is malformed (5 milliseconds)
[info] - empty values for attributes' constraints matches all values (40 milliseconds)
[info] - subset match is performed for set attributes (7 milliseconds)
[info] - less than equal match is performed on scalar attributes (5 milliseconds)
[info] - contains match is performed for range attributes (44 milliseconds)
[info] - equality match is performed for text attributes (2 milliseconds)
[info] - Port reservation is done correctly with user specified ports only (40 milliseconds)
[info] - Port reservation is done correctly with all random ports (2 milliseconds)
[info] - Port reservation is done correctly with user specified ports only - multiple ranges (3 milliseconds)
[info] - Port reservation is done correctly with all random ports - multiple ranges (2 milliseconds)
[info] - Principal specified via spark.mesos.principal (21 milliseconds)
[info] - Principal specified via spark.mesos.principal.file (26 milliseconds)
[info] - Principal specified via spark.mesos.principal.file that does not exist (6 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL (7 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE (2 milliseconds)
[info] - Principal specified via SPARK_MESOS_PRINCIPAL_FILE that does not exist (2 milliseconds)
[info] - Secret specified via spark.mesos.secret (2 milliseconds)
[info] - Principal specified via spark.mesos.secret.file (2 milliseconds)
[info] - Principal specified via spark.mesos.secret.file that does not exist (2 milliseconds)
[info] - Principal specified via SPARK_MESOS_SECRET (1 millisecond)
[info] - Principal specified via SPARK_MESOS_SECRET_FILE (1 millisecond)
[info] - Secret specified with no principal (2 milliseconds)
[info] - Principal specification preference (1 millisecond)
[info] - Secret specification preference (1 millisecond)
[info] MesosSchedulerBackendUtilSuite:
[info] - ContainerInfo fails to parse invalid docker parameters (163 milliseconds)
[info] - ContainerInfo parses docker parameters (3 milliseconds)
[info] - SPARK-28778 ContainerInfo respects Docker network configuration (29 milliseconds)
[info] MesosFineGrainedSchedulerBackendSuite:
[info] - weburi is set in created scheduler driver (61 milliseconds)
[info] - Use configured mesosExecutor.cores for ExecutorInfo (68 milliseconds)
[info] - check spark-class location correctly (9 milliseconds)
[info] - spark docker properties correctly populate the DockerInfo message (20 milliseconds)
[info] - mesos resource offers result in launching tasks (81 milliseconds)
[info] - can handle multiple roles (10 milliseconds)
[info] MesosCoarseGrainedSchedulerBackendSuite:
[info] - flume polling test multiple hosts (13 seconds, 840 milliseconds)
[info] - mesos supports killing and limiting executors (1 second, 718 milliseconds)
[info] FlumeStreamSuite:
[info] - mesos supports killing and relaunching tasks with executors (204 milliseconds)
[info] - mesos supports spark.executor.cores (136 milliseconds)
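The MesosSchedulerUtilsSuite names above enumerate how Mesos credentials can be supplied: spark.mesos.principal / spark.mesos.secret, their .file variants, or the SPARK_MESOS_PRINCIPAL(_FILE) / SPARK_MESOS_SECRET(_FILE) environment variables, with the "specification preference" tests covering precedence. A sketch of the conf-based form; the values are placeholders:

    val conf = new org.apache.spark.SparkConf()
      .set("spark.mesos.principal", "my-principal")   // placeholder value
      .set("spark.mesos.secret", "my-secret")         // placeholder value
      // File-based variants read the value from a local file instead:
      // .set("spark.mesos.principal.file", "/path/to/principal")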
[info] - mesos supports unset spark.executor.cores (147 milliseconds)
[info] - zero sized blocks (6 seconds, 411 milliseconds)
[info] - mesos does not acquire more than spark.cores.max (100 milliseconds)
[info] - mesos does not acquire gpus if not specified (109 milliseconds)
[info] - mesos does not acquire more than spark.mesos.gpus.max (93 milliseconds)
[info] - mesos declines offers that violate attribute constraints (128 milliseconds)
[info] - flume input stream (1 second, 160 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max (89 milliseconds)
[info] - mesos declines offers with a filter when maxCores not a multiple of executor.cores (92 milliseconds)
[info] - mesos declines offers with a filter when reached spark.cores.max with executor.cores (118 milliseconds)
[info] - mesos assigns tasks round-robin on offers (111 milliseconds)
[info] - mesos creates multiple executors on a single slave (143 milliseconds)
[info] - mesos doesn't register twice with the same shuffle service (80 milliseconds)
[info] - flume input compressed stream (998 milliseconds)
[info] - Port offer decline when there is no appropriate range (117 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumeStreamSuite.testFlumeStream started
[info] - Port offer accepted when ephemeral ports are used (85 milliseconds)
[info] - Port offer accepted with user defined port numbers (97 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.198s
[info] Test run started
[info] Test org.apache.spark.streaming.flume.JavaFlumePollingStreamSuite.testFlumeStream started
[info] - mesos kills an executor when told (98 milliseconds)
[info] - repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925) (11 seconds, 14 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.14s
[info] - weburi is set in created scheduler driver (97 milliseconds)
[info] - failover timeout is set in created scheduler driver (93 milliseconds)
[info] - honors unset spark.mesos.containerizer (228 milliseconds)
[info] - honors spark.mesos.containerizer="mesos" (183 milliseconds)
[info] - docker settings are reflected in created tasks (198 milliseconds)
[info] - force-pull-image option is disabled by default (163 milliseconds)
[info] LabelPropagationSuite:
[info] - mesos supports spark.executor.uri (144 milliseconds)
[info] - mesos supports setting fetcher cache (157 milliseconds)
[info] - mesos supports disabling fetcher cache (125 milliseconds)
[info] - mesos sets task name to spark.app.name (120 milliseconds)
[info] - mesos sets configurable labels on tasks (120 milliseconds)
[info] - mesos supports spark.mesos.network.name and spark.mesos.network.labels (89 milliseconds)
[info] - SPARK-28778 '--hostname' shouldn't be set for executor when virtual network is enabled (733 milliseconds)
[info] - supports spark.scheduler.minRegisteredResourcesRatio (212 milliseconds)
[info] - zero sized blocks without kryo (6 seconds, 130 milliseconds)
[info] - caching (encryption = off) (5 seconds, 234 milliseconds)
[info] - shuffle on mutable pairs (3 seconds, 538 milliseconds)
[info] - caching (encryption = on) (4 seconds, 266 milliseconds)
[info] - Label Propagation (8 seconds, 83 milliseconds)
[info] BytecodeUtilsSuite:
[info] - closure invokes a method (7 milliseconds)
[info] - closure inside a closure invokes a method (2 milliseconds)
[info] - closure inside a closure inside a closure invokes a method (3 milliseconds)
[info] - closure calling a function that invokes a method (2 milliseconds)
[info] - closure calling a function that invokes a method which uses another closure (3 milliseconds)
[info] - nested closure (3 milliseconds)
[info] PregelSuite:
[info] - supports data locality with dynamic allocation (6 seconds, 229 milliseconds)
[info] - Creates an env-based reference secrets. (157 milliseconds)
[info] - Creates an env-based value secrets. (81 milliseconds)
[info] - Creates file-based reference secrets. (79 milliseconds)
[info] - Creates a file-based value secrets. (89 milliseconds)
[info] - 1 iteration (748 milliseconds)
[info] MesosClusterSchedulerSuite:
[info] - can queue drivers (44 milliseconds)
[info] - can kill queued drivers (27 milliseconds)
[info] - can handle multiple roles (44 milliseconds)
[info] - escapes commandline args for the shell (51 milliseconds)
[info] - supports spark.mesos.driverEnv.* (26 milliseconds)
[info] - supports spark.mesos.network.name and spark.mesos.network.labels (26 milliseconds)
[info] - supports setting fetcher cache on the dispatcher (27 milliseconds)
[info] - supports setting fetcher cache in the submission (26 milliseconds)
[info] - supports disabling fetcher cache (30 milliseconds)
[info] - accept/decline offers with driver constraints (49 milliseconds)
[info] - supports spark.mesos.driver.labels (41 milliseconds)
[info] - can kill supervised drivers (42 milliseconds)
[info] - sorting on mutable pairs (3 seconds, 424 milliseconds)
[info] - SPARK-27347: do not restart outdated supervised drivers (1 second, 542 milliseconds)
[info] - Declines offer with refuse seconds = 120. (25 milliseconds)
[info] - Creates an env-based reference secrets. (23 milliseconds)
[info] - Creates an env-based value secrets. (30 milliseconds)
[info] - Creates file-based reference secrets. (29 milliseconds)
[info] - Creates a file-based value secrets. (37 milliseconds)
[info] MesosClusterDispatcherSuite:
[info] - prints usage on empty input (14 milliseconds)
[info] - prints usage with only --help (2 milliseconds)
[info] - prints error with unrecognized options (2 milliseconds)
[info] MesosClusterManagerSuite:
[info] - mesos fine-grained (59 milliseconds)
[info] - mesos coarse-grained (63 milliseconds)
[info] - chain propagation (2 seconds, 388 milliseconds)
[info] PeriodicGraphCheckpointerSuite:
[info] - mesos with zookeeper (69 milliseconds)
[info] - Persisting (137 milliseconds)
[info] - mesos with i/o encryption throws error (133 milliseconds)
[info] MesosClusterDispatcherArgumentsSuite:
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.mesos.key2,value2)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test if spark config args are passed successfully (15 milliseconds)
(spark.testing,true)
(spark.ui.showConsoleProgress,false)
(spark.master.rest.enabled,false)
(spark.test.home,/home/jenkins/workspace/spark-branch-2.4-test-sbt-hadoop-2.6)
(spark.ui.enabled,false)
(spark.unsafe.exceptionOnMemoryLeak,true)
(spark.memory.debugFill,true)
(spark.port.maxRetries,100)
[info] - test non conf settings (2 milliseconds)
[info] MesosProtoUtilsSuite:
[info] - mesosLabels (1 millisecond)
[info] - caching on disk (encryption = off) (3 seconds, 747 milliseconds)
[info] ReliableKafkaStreamSuite:
[info] - cogroup using mutable pairs (3 seconds, 283 milliseconds)
[info] - Checkpointing (2 seconds, 835 milliseconds)
[info] ConnectedComponentsSuite:
[info] - caching on disk (encryption = on) (3 seconds, 950 milliseconds)
[info] - subtract mutable pairs (3 seconds, 690 milliseconds)
[info] - Reliable Kafka input stream with single topic (2 seconds, 229 milliseconds)
[info] - Grid Connected Components (3 seconds, 272 milliseconds)
[info] - Reliable Kafka input stream with multiple topics (869 milliseconds)
[info] KafkaStreamSuite:
[info] - caching in memory, replicated (encryption = off) (4 seconds, 309 milliseconds)
[info] - sort with Java non serializable class - Kryo (3 seconds, 804 milliseconds)
[info] - Reverse Grid Connected Components (3 seconds, 147 milliseconds)
[info] - Kafka input stream (1 second, 357 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - basic stream receiving with multiple topics and smallest starting offset (940 milliseconds)
[info] - receiving from largest starting offset (250 milliseconds)
[info] - creating stream by offset (306 milliseconds)
[info] - sort with Java non serializable class - Java (2 seconds, 960 milliseconds)
[info] - caching in memory, replicated (encryption = off) (with replication as stream) (3 seconds, 871 milliseconds)
[info] - shuffle with different compression settings (SPARK-3426) (451 milliseconds)
[info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (448 milliseconds)
[info] - cannot find its local shuffle file if no execution of the stage and rerun shuffle (133 milliseconds)
[info] - metrics for shuffle without aggregation (502 milliseconds)
[info] - offset recovery (2 seconds, 135 milliseconds)
[info] - Chain Connected Components (4 seconds, 877 milliseconds)
[info] - metrics for shuffle with aggregation (660 milliseconds)
[info] - multiple simultaneous attempts for one task (SPARK-8029) (87 milliseconds)
[info] - Direct Kafka stream report input information (657 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (68 milliseconds)
[info] - maxMessagesPerPartition with no lag (103 milliseconds)
[info] - maxMessagesPerPartition respects max rate (95 milliseconds)
[info] - using rate controller (588 milliseconds)
[info] - use backpressure.initialRate with backpressure (266 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (220 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to one (85 milliseconds)
[info] KafkaRDDSuite:
[info] - caching in memory, replicated (encryption = on) (4 seconds, 61 milliseconds)
[info] - basic usage (215 milliseconds)
[info] - iterator boundary conditions (214 milliseconds)
[info] KafkaClusterSuite:
[info] - metadata apis (12 milliseconds)
[info] - leader offset apis (5 milliseconds)
[info] - consumer offset apis (16 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaStreamSuite.testKafkaStream started
[info] - using external shuffle service (4 seconds, 128 milliseconds)
[info] ConfigEntrySuite:
[info] - conf entry: int (1 millisecond)
[info] - conf entry: long (1 millisecond)
[info] - conf entry: double (0 milliseconds)
[info] - conf entry: boolean (0 milliseconds)
[info] - conf entry: optional (1 millisecond)
[info] - conf entry: fallback (0 milliseconds)
[info] - conf entry: time (1 millisecond)
[info] - conf entry: bytes (1 millisecond)
[info] - Reverse Chain Connected Components (4 seconds, 752 milliseconds)
[info] - conf entry: regex (1 millisecond)
[info] - conf entry: string seq (1 millisecond)
[info] - conf entry: int seq (0 milliseconds)
[info] - conf entry: transformation (1 millisecond)
[info] - conf entry: checkValue() (2 milliseconds)
[info] - conf entry: valid values check (1 millisecond)
[info] - conf entry: conversion error (1 millisecond)
[info] - default value handling is null-safe (0 milliseconds)
[info] - variable expansion of spark config entries (5 milliseconds)
[info] - conf entry : default function (1 millisecond)
[info] - conf entry: alternative keys (0 milliseconds)
[info] - onCreate (1 millisecond)
[info] InputOutputMetricsSuite:
[info] - input metrics for old hadoop with coalesce (182 milliseconds)
[info] - input metrics with cache and coalesce (138 milliseconds)
[info] - input metrics for new Hadoop API with coalesce (91 milliseconds)
[info] - input metrics when reading text file (41 milliseconds)
[info] - Connected Components on a Toy Connected Graph (775 milliseconds)
[info] VertexRDDSuite:
[info] - input metrics on records read - simple (112 milliseconds)
[info] - input metrics on records read - more stages (116 milliseconds)
[info] - input metrics on records - New Hadoop API (29 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.719s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaKafkaRDDSuite.testKafkaRDD started
[info] - input metrics on records read with cache (118 milliseconds)
[info] - filter (454 milliseconds)
[info] - input read/write and shuffle read/write metrics all line up (173 milliseconds)
[info] - input metrics with interleaved reads (298 milliseconds)
[info] - mapValues (458 milliseconds)
[info] - output metrics on records written (85 milliseconds)
[info] - output metrics on records written - new Hadoop API (100 milliseconds)
[info] - caching in memory, replicated (encryption = on) (with replication as stream) (3 seconds, 914 milliseconds)
[info] - minus (249 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.961s
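ConfigEntrySuite above tests Spark's internal typed config DSL. That API is private[spark], so the sketch below only compiles inside Spark's own org.apache.spark packages, and the key shown is hypothetical:

    import org.apache.spark.internal.config.ConfigBuilder

    // Typed entry with a default; the "conf entry: time" / "bytes" tests
    // exercise the analogous timeConf / bytesConf builders.
    val MAX_WIDGETS = ConfigBuilder("spark.test.maxWidgets")  // hypothetical key
      .doc("Hypothetical example entry.")
      .intConf
      .createWithDefault(3)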
[info] Test run started
[info] Test org.apache.spark.streaming.kafka.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - output metrics when writing text file (68 milliseconds)
[info] - input metrics with old CombineFileInputFormat (41 milliseconds)
[info] - input metrics with new CombineFileInputFormat (57 milliseconds)
[info] - minus with RDD[(VertexId, VD)] (250 milliseconds)
[info] - input metrics with old Hadoop API in different thread (77 milliseconds)
[info] - input metrics with new Hadoop API in different thread (109 milliseconds)
[info] AppStatusStoreSuite:
[info] - quantile calculation: 1 task (28 milliseconds)
[info] - quantile calculation: few tasks (5 milliseconds)
[info] - quantile calculation: more tasks (19 milliseconds)
[info] - quantile calculation: lots of tasks (85 milliseconds)
[info] - quantile calculation: custom quantiles (43 milliseconds)
[info] - minus with non-equal number of partitions (485 milliseconds)
[info] - quantile cache (113 milliseconds)
[info] - SPARK-28638: only successful tasks have taskSummary when with in memory kvstore (1 millisecond)
[info] - SPARK-28638: summary should contain successful tasks only when with in memory kvstore (10 milliseconds)
[info] CountEvaluatorSuite:
[info] - test count 0 (1 millisecond)
[info] - test count >= 1 (30 milliseconds)
[info] TaskResultGetterSuite:
[info] - handling results smaller than max RPC message size (57 milliseconds)
[info] - diff (408 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.164s
[info] - handling results larger than max RPC message size (365 milliseconds)
[info] - handling total size of results larger than maxResultSize (116 milliseconds)
[info] - diff with RDD[(VertexId, VD)] (440 milliseconds)
[info] - task retried if result missing from block manager (321 milliseconds)
[info] - diff vertices with non-equal number of partitions (368 milliseconds)
[info] - failed task deserialized with the correct classloader (SPARK-11195) (299 milliseconds)
[info] - task result size is set on the driver, not the executors (117 milliseconds)
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
    at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:304)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:193)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
[info] - failed task is handled when error occurs deserializing the reason (63 milliseconds)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply$mcV$sp(TaskResultGetter.scala:142)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$4$$anonfun$run$2.apply(TaskResultGetter.scala:138)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$4.run(TaskResultGetter.scala:138)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
[info] SerializerPropertiesSuite:
[info] - JavaSerializer does not support relocation (3 milliseconds)
[info] KafkaRDDSuite:
[info] - KryoSerializer supports relocation when auto-reset is enabled (107 milliseconds)
[info] - KryoSerializer does not support relocation when auto-reset is disabled (12 milliseconds)
[info] DriverRunnerTest:
[info] - Process succeeds instantly (48 milliseconds)
[info] - Process failing several times and then succeeding (25 milliseconds)
[info] - leftJoin (539 milliseconds)
[info] - Process doesn't restart if not supervised (22 milliseconds)
[info] - Process doesn't restart if killed (23 milliseconds)
[info] - Reset of backoff counter (24 milliseconds)
[info] - Kill process finalized with state KILLED (33 milliseconds)
[info] - Finalized with state FINISHED (34 milliseconds)
[info] - Finalized with state FAILED (51 milliseconds)
[info] - Handle exception starting process (34 milliseconds)
[info] MapOutputTrackerSuite:
[info] - master start and stop (70 milliseconds)
[info] - master register shuffle and fetch (74 milliseconds)
[info] - leftJoin vertices with non-equal number of partitions (424 milliseconds)
[info] - master register and unregister shuffle (57 milliseconds)
[info] - master register shuffle and unregister map output and fetch (81 milliseconds)
[info] - remote fetch (181 milliseconds)
[info] - remote fetch below max RPC message size (78 milliseconds)
[info] - min broadcast size exceeds max RPC message size (27 milliseconds)
[info] - getLocationsWithLargestOutputs with multiple outputs in same machine (88 milliseconds)
[info] - innerJoin (564 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = off) (3 seconds, 512 milliseconds)
[info] - innerJoin vertices with the non-equal number of partitions (295 milliseconds)
[info] - aggregateUsingIndex (425 milliseconds)
[info] - mergeFunc (170 milliseconds)
[info] - cache, getStorageLevel (76 milliseconds)
[info] - checkpoint (666 milliseconds)
[info] - count (439 milliseconds)
[info] EdgePartitionSuite:
[info] - reverse (8 milliseconds)
[info] - map (1 millisecond)
[info] - filter (4 milliseconds)
[info] - groupEdges (2 milliseconds)
[info] - innerJoin (2 milliseconds)
[info] - isActive, numActives, replaceActives (0 milliseconds)
[info] - tripletIterator (0 milliseconds)
[info] - serialization (16 milliseconds)
[info] EdgeSuite:
[info] - compare (1 millisecond)
[info] PageRankSuite:
[info] - caching in memory, serialized, replicated (encryption = off) (with replication as stream) (3 seconds, 610 milliseconds)
[info] - Star PageRank (1 second, 838 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (3 seconds, 542 milliseconds)
[info] - Star PersonalPageRank (3 seconds, 697 milliseconds)
[info] - caching in memory, serialized, replicated (encryption = on) (with replication as stream) (4 seconds, 303 milliseconds)
[info] - caching on disk, replicated (encryption = off) (4 seconds, 419 milliseconds)
[info] - remote fetch using broadcast (17 seconds, 826 milliseconds)
[info] - equally divide map statistics tasks (40 milliseconds)
[info] - zero-sized blocks should be excluded when getMapSizesByExecutorId (87 milliseconds)
[info] PythonBroadcastSuite:
[info] - PythonBroadcast can be serialized with Kryo (SPARK-4882) (18 milliseconds)
[info] ExecutorRunnerTest:
[info] - command includes appId (26 milliseconds)
[info] CompressionCodecSuite:
[info] - default compression codec (4 milliseconds)
[info] - lz4 compression codec (1 millisecond)
[info] - lz4 compression codec short form (1 millisecond)
[info] - lz4 supports concatenation of serialized streams (2 milliseconds)
[info] - lzf compression codec (11 milliseconds)
[info] - lzf compression codec short form (1 millisecond)
[info] - lzf supports concatenation of serialized streams (1 millisecond)
[info] - snappy compression codec (32 milliseconds)
[info] - snappy compression codec short form (2 milliseconds)
[info] - snappy supports concatenation of serialized streams (1 millisecond)
[info] - zstd compression codec (22 milliseconds)
[info] - zstd compression codec short form (1 millisecond)
[info] - zstd supports concatenation of serialized zstd (0 milliseconds)
[info] - bad compression codec (1 millisecond)
[info] MetricsSystemSuite:
[info] - MetricsSystem with default config (2 milliseconds)
[info] - MetricsSystem with sources add (7 milliseconds)
[info] - MetricsSystem with Driver instance (1 millisecond)
[info] - MetricsSystem with Driver instance and spark.app.id is not set (2 milliseconds)
[info] - MetricsSystem with Driver instance and spark.executor.id is not set (2 milliseconds)
[info] - MetricsSystem with Executor instance (2 milliseconds)
[info] - MetricsSystem with Executor instance and spark.app.id is not set (1 millisecond)
[info] - MetricsSystem with Executor instance and spark.executor.id is not set (1 millisecond)
[info] - MetricsSystem with instance which is neither Driver nor Executor (2 milliseconds)
[info] - MetricsSystem with Executor instance, with custom namespace (1 millisecond)
[info] - MetricsSystem with Executor instance, custom namespace which is not set (1 millisecond)
[info] - MetricsSystem with Executor instance, custom namespace, spark.executor.id not set (1 millisecond)
[info] - MetricsSystem with non-driver, non-executor instance with custom namespace (2 milliseconds)
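CompressionCodecSuite above covers the four block-compression codecs and their short-form names. Selecting one is a single configuration key; a sketch (lz4 is already the default in Spark 2.x):

    val conf = new org.apache.spark.SparkConf()
      // Short forms "lz4", "lzf", "snappy", "zstd" resolve to the
      // corresponding org.apache.spark.io.*CompressionCodec classes;
      // the "bad compression codec" test asserts unknown names fail.
      .set("spark.io.compression.codec", "zstd")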
[info] ConfigReaderSuite:
[info] - variable expansion (1 millisecond)
[info] - circular references (1 millisecond)
[info] - spark conf provider filters config keys (0 milliseconds)
[info] DoubleRDDSuite:
[info] - sum (44 milliseconds)
[info] - WorksOnEmpty (43 milliseconds)
[info] - WorksWithOutOfRangeWithOneBucket (33 milliseconds)
[info] - WorksInRangeWithOneBucket (33 milliseconds)
[info] - WorksInRangeWithOneBucketExactMatch (32 milliseconds)
[info] - WorksWithOutOfRangeWithTwoBuckets (25 milliseconds)
[info] - WorksWithOutOfRangeWithTwoUnEvenBuckets (14 milliseconds)
[info] - WorksInRangeWithTwoBuckets (25 milliseconds)
[info] - WorksInRangeWithTwoBucketsAndNaN (27 milliseconds)
[info] - WorksInRangeWithTwoUnevenBuckets (34 milliseconds)
[info] - WorksMixedRangeWithTwoUnevenBuckets (15 milliseconds)
[info] - WorksMixedRangeWithFourUnevenBuckets (16 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaN (15 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange (18 milliseconds)
[info] - WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity (16 milliseconds)
[info] - WorksWithOutOfRangeWithInfiniteBuckets (15 milliseconds)
[info] - ThrowsExceptionOnInvalidBucketArray (2 milliseconds)
[info] - WorksWithoutBucketsBasic (48 milliseconds)
[info] - WorksWithoutBucketsBasicSingleElement (27 milliseconds)
[info] - WorksWithoutBucketsBasicNoRange (29 milliseconds)
[info] - WorksWithoutBucketsBasicTwo (28 milliseconds)
[info] - WorksWithDoubleValuesAtMinMax (69 milliseconds)
[info] - WorksWithoutBucketsWithMoreRequestedThanElements (29 milliseconds)
[info] - WorksWithoutBucketsForLargerDatasets (42 milliseconds)
[info] - WorksWithoutBucketsWithNonIntegralBucketEdges (34 milliseconds)
[info] - Grid PageRank (11 seconds, 761 milliseconds)
[info] - WorksWithHugeRange (406 milliseconds)
[info] - caching on disk, replicated (encryption = off) (with replication as stream) (3 seconds, 539 milliseconds)
[info] - ThrowsExceptionOnInvalidRDDs (40 milliseconds)
[info] NextIteratorSuite:
[info] - one iteration (4 milliseconds)
[info] - two iterations (1 millisecond)
[info] - empty iteration (1 millisecond)
[info] - close is called once for empty iterations (0 milliseconds)
[info] - close is called once for non-empty iterations (0 milliseconds)
[info] SparkSubmitSuite:
[info] - prints usage on empty input (21 milliseconds)
[info] - prints usage with only --help (3 milliseconds)
[info] - prints error with unrecognized options (1 millisecond)
[info] - handle binary specified but not class (115 milliseconds)
[info] - handles arguments with --key=val (4 milliseconds)
[info] - handles arguments to user program (1 millisecond)
[info] - handles arguments to user program with name collision (1 millisecond)
[info] - print the right queue name (10 milliseconds)
[info] - SPARK-24241: do not fail fast if executor num is 0 when dynamic allocation is enabled (2 milliseconds)
[info] - specify deploy mode through configuration (253 milliseconds)
[info] - handles YARN cluster mode (20 milliseconds)
[info] - handles YARN client mode (41 milliseconds)
[info] - handles standalone cluster mode (15 milliseconds)
[info] - handles legacy standalone cluster mode (16 milliseconds)
[info] - handles standalone client mode (35 milliseconds)
[info] - handles mesos client mode (40 milliseconds)
[info] - handles k8s cluster mode (22 milliseconds)
[info] - handles confs with flag equivalents (19 milliseconds)
[info] - SPARK-21568 ConsoleProgressBar should be enabled only in shells (72 milliseconds)
[info] - Chain PageRank (2 seconds, 842 milliseconds)
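The DoubleRDDSuite names above ("WorksInRangeWithTwoBuckets", "WorksWithoutBucketsBasic", …) all exercise the public histogram API on RDD[Double]. A minimal sketch; `sc` is a hypothetical SparkContext:

    val rdd = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))

    // Explicit bucket edges, as in the "WithTwoBuckets" tests.
    val counts: Array[Long] = rdd.histogram(Array(0.0, 2.5, 5.0))  // Array(2, 2)

    // Bucket count only, as in the "WithoutBuckets" tests:
    // evenly spaced edges are computed from the data's min/max.
    val (edges, freq) = rdd.histogram(2)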
[info] - caching on disk, replicated (encryption = on) (3 seconds, 601 milliseconds)
[info] - launch simple application with spark-submit (5 seconds, 143 milliseconds)
[info] - Chain PersonalizedPageRank (3 seconds, 659 milliseconds)
[info] - caching on disk, replicated (encryption = on) (with replication as stream) (3 seconds, 672 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (3 seconds, 546 milliseconds)
[info] - launch simple application with spark-submit with redaction (5 seconds, 141 milliseconds)
[info] - caching in memory and disk, replicated (encryption = off) (with replication as stream) (3 seconds, 852 milliseconds)
[info] - caching in memory and disk, replicated (encryption = on) (4 seconds, 37 milliseconds)
[info] - includes jars passed in through --jars (9 seconds, 980 milliseconds)
[info] - Loop with source PageRank (15 seconds, 219 milliseconds)
[info] - caching in memory and disk, replicated (encryption = on) (with replication as stream) (4 seconds, 90 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (3 seconds, 640 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream) (3 seconds, 727 milliseconds)
[info] - includes jars passed in through --packages (9 seconds, 560 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (3 seconds, 517 milliseconds)
[info] - Loop with sink PageRank (13 seconds, 980 milliseconds)
[info] EdgeRDDSuite:
[info] - cache, getStorageLevel (81 milliseconds)
[info] - checkpointing (226 milliseconds)
[info] - count (116 milliseconds)
[info] GraphSuite:
[info] - Graph.fromEdgeTuples (240 milliseconds)
[info] - Graph.fromEdges (111 milliseconds)
[info] - Graph.apply (294 milliseconds)
[info] - triplets (342 milliseconds)
[info] - caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream) (3 seconds, 727 milliseconds)
[info] - includes jars passed through spark.jars.packages and spark.jars.repositories (9 seconds, 814 milliseconds)
[info] - correctly builds R packages included in a jar with --packages !!! IGNORED !!!
[info] - compute without caching when no partitions fit in memory (3 seconds, 404 milliseconds) [info] - partitionBy (7 seconds, 741 milliseconds) [info] - mapVertices (276 milliseconds) [info] - compute when only some partitions fit in memory (3 seconds, 632 milliseconds) [info] - mapVertices changing type with same erased type (293 milliseconds) [info] - mapEdges (166 milliseconds) [info] - mapTriplets (328 milliseconds) [info] - reverse (304 milliseconds) [info] - reverse with join elimination (256 milliseconds) [info] - subgraph (369 milliseconds) [info] - mask (285 milliseconds) [info] - groupEdges (392 milliseconds) [info] - aggregateMessages (389 milliseconds) [info] - passing environment variables to cluster (2 seconds, 902 milliseconds) [info] - outerJoinVertices (558 milliseconds) [info] - more edge partitions than vertex partitions (249 milliseconds) [info] - checkpoint (376 milliseconds) [info] - cache, getStorageLevel (59 milliseconds) [info] - non-default number of edge partitions (273 milliseconds) [info] - unpersist graph RDD (502 milliseconds) [info] - SPARK-14219: pickRandomVertex (187 milliseconds) [info] ShortestPathsSuite: [info] - include an external JAR in SparkR (10 seconds, 74 milliseconds) [info] - Shortest Path Computations (664 milliseconds) [info] GraphOpsSuite: [info] - resolves command line argument paths correctly (118 milliseconds) [info] - ambiguous archive mapping results in error message (22 milliseconds) [info] - joinVertices (283 milliseconds) [info] - resolves config paths correctly (175 milliseconds) [info] - collectNeighborIds (465 milliseconds) [info] - removeSelfEdges (188 milliseconds) [info] - filter (283 milliseconds) [info] - convertToCanonicalEdges (199 milliseconds) [info] - collectEdgesCycleDirectionOut (398 milliseconds) [info] - collectEdgesCycleDirectionIn (464 milliseconds) [info] - collectEdgesCycleDirectionEither (467 milliseconds) [info] - user classpath first in driver (2 seconds, 481 milliseconds) [info] - SPARK_CONF_DIR overrides spark-defaults.conf (7 milliseconds) [info] - support glob path (40 milliseconds) [info] - SPARK-27575: yarn confs should merge new value with existing value (51 milliseconds) [info] - downloadFile - invalid url (35 milliseconds) [info] - downloadFile - file doesn't exist (32 milliseconds) [info] - downloadFile does not download local file (23 milliseconds) [info] - download one file to local (35 milliseconds) [info] - download list of files to local (33 milliseconds) [info] - remove copies of application jar from classpath (35 milliseconds) [info] - Avoid re-upload remote resources in yarn client mode (38 milliseconds) [info] - download remote resource if it is not supported by yarn service (40 milliseconds) [info] - collectEdgesChainDirectionOut (415 milliseconds) [info] - avoid downloading remote resource if it is supported by yarn service (38 milliseconds) [info] - recover from node failures (5 seconds, 866 milliseconds) [info] - force download from blacklisted schemes (37 milliseconds) [info] - force download for all the schemes (38 milliseconds) [info] - start SparkApplication without modifying system properties (39 milliseconds) [info] - support --py-files/spark.submit.pyFiles in non pyspark application (104 milliseconds) [info] - handles natural line delimiters in --properties-file and --conf uniformly (37 milliseconds) [info] NettyRpcEnvSuite: [info] - send a message locally (9 milliseconds) [info] - collectEdgesChainDirectionIn (381 milliseconds) [info] - send a message remotely (47 
milliseconds) [info] - send a RpcEndpointRef (2 milliseconds) [info] - ask a message locally (2 milliseconds) [info] - ask a message remotely (60 milliseconds) [info] - ask a message timeout (50 milliseconds) [info] - onStart and onStop (1 millisecond) [info] - onError: error in onStart (1 millisecond) [info] - onError: error in onStop (1 millisecond) [info] - onError: error in receive (1 millisecond) [info] - self: call in onStart (1 millisecond) [info] - self: call in receive (6 milliseconds) [info] - self: call in onStop (1 millisecond) [info] - collectEdgesChainDirectionEither (406 milliseconds) [info] StronglyConnectedComponentsSuite: [info] - call receive in sequence (275 milliseconds) [info] - stop(RpcEndpointRef) reentrant (2 milliseconds) [info] - sendWithReply (1 millisecond) [info] - sendWithReply: remotely (50 milliseconds) [info] - sendWithReply: error (2 milliseconds) [info] - sendWithReply: remotely error (63 milliseconds) [info] - network events in sever RpcEnv when another RpcEnv is in server mode (116 milliseconds) [info] - network events in sever RpcEnv when another RpcEnv is in client mode (102 milliseconds) [info] - network events in client RpcEnv when another RpcEnv is in server mode (114 milliseconds) [info] - sendWithReply: unserializable error (68 milliseconds) [info] - port conflict (60 milliseconds) [info] - Island Strongly Connected Components (865 milliseconds) [info] - send with authentication (473 milliseconds) [info] - send with SASL encryption (128 milliseconds) [info] - send with AES encryption (154 milliseconds) [info] - ask with authentication (165 milliseconds) [info] - ask with SASL encryption (101 milliseconds) [info] - ask with AES encryption (102 milliseconds) [info] - construct RpcTimeout with conf property (2 milliseconds) [info] - ask a message timeout on Future using RpcTimeout (24 milliseconds) [info] - file server (119 milliseconds) [info] - SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events (121 milliseconds) [info] - non-existent endpoint (2 milliseconds) [info] - advertise address different from bind address (53 milliseconds) [info] - RequestMessage serialization (10 milliseconds) Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anonfun$5$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:113) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:215) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:221) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) [info] - StackOverflowError should be sent back and Dispatcher should survive (95 milliseconds) [info] JsonProtocolSuite: [info] - SparkListenerEvent (249 milliseconds) [info] - Dependent Classes (20 milliseconds) [info] - ExceptionFailure backward compatibility: full stack trace (2 milliseconds) [info] - StageInfo backward compatibility (details, accumulables) (1 millisecond) [info] - InputMetrics backward compatibility (2 milliseconds) [info] - Input/Output records backwards compatibility (1 millisecond) [info] - Shuffle Read/Write records backwards compatibility (1 millisecond) [info] - OutputMetrics backward compatibility (2 milliseconds) [info] - BlockManager events backward compatibility (1 millisecond) [info] - FetchFailed backwards compatibility (1 millisecond) [info] - ShuffleReadMetrics: Local bytes read backwards compatibility (1 millisecond) [info] - SparkListenerApplicationStart backwards compatibility (1 millisecond) [info] - ExecutorLostFailure backward compatibility (1 millisecond) [info] - SparkListenerJobStart backward compatibility (3 milliseconds) [info] - SparkListenerJobStart and SparkListenerJobEnd backward compatibility (3 milliseconds) [info] - RDDInfo backward compatibility (scope, parent IDs, callsite) (1 millisecond) [info] - StageInfo backward compatibility (parent IDs) (1 millisecond) [info] - TaskCommitDenied backward compatibility (1 millisecond) [info] - AccumulableInfo backward compatibility (1 millisecond) [info] - ExceptionFailure backward compatibility: accumulator updates (9 milliseconds) [info] - AccumulableInfo value de/serialization (2 milliseconds) [info] - SPARK-31923: unexpected value type of internal accumulator (1 millisecond) [info] RPackageUtilsSuite: [info] - pick which jars to unpack using the manifest (297 milliseconds) [info] - Cycle Strongly Connected Components (2 seconds, 862 milliseconds) [info] - build an R package from a jar end to end (2 seconds, 967 milliseconds) [info] - 2 Cycle Strongly Connected Components (2 seconds, 389 milliseconds) [info] VertexPartitionSuite: [info] - isDefined, filter (5 milliseconds) [info] - map (1 millisecond) [info] - diff (1 millisecond) [info] - leftJoin (2 milliseconds) [info] - innerJoin (3 milliseconds) [info] - createUsingIndex (0 milliseconds) [info] - innerJoinKeepLeft (1 millisecond) [info] - aggregateUsingIndex (1 millisecond) [info] - jars that don't exist are skipped and print warning (356 milliseconds) [info] - reindex (2 milliseconds) [info] - serialization (10 milliseconds) [info] GraphLoaderSuite: [info] - faulty R package shows documentation (363 milliseconds) [info] - GraphLoader.edgeListFile (359 milliseconds) [info] TriangleCountSuite: [info] - jars without manifest return false (113 milliseconds) [info] - SparkR zipping works properly (7 milliseconds) [info] TopologyMapperSuite: [info] - File based Topology Mapper (6 milliseconds) [info] EventLoggingListenerSuite: [info] - Verify log file exist (43 milliseconds) [info] - Basic event logging (84 milliseconds) [info] - Basic event logging with compression (219 milliseconds) [info] - Count a single triangle (522 milliseconds) [info] - recover from repeated node failures during shuffle-map (8 seconds, 14 milliseconds) [info] - Count two triangles (581 milliseconds) [info] - Count two triangles with bi-directed edges (529 milliseconds) [info] - Count a single triangle with duplicate edges (654 milliseconds) [info] GraphGeneratorsSuite: [info] - GraphGenerators.generateRandomEdges (3 milliseconds) 
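
The EventLoggingListenerSuite and JsonProtocolSuite entries above cover two ends of the same pipeline: listener events are serialized to JSON and appended to an event log that the history server later replays. A minimal sketch of the configuration those tests exercise (the log directory is a placeholder and must exist before the application starts):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("event-log-sketch")
      .set("spark.eventLog.enabled", "true")          // write JSON events for this application
      .set("spark.eventLog.dir", "/tmp/spark-events") // placeholder; create it before starting
      .set("spark.eventLog.compress", "true")         // the "with compression" variants above
    val sc = new SparkContext(conf)
    sc.parallelize(1 to 100).count()                  // emits job/stage/task events into the log
    sc.stop()                                         // finalizes the event log file
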
[info] - GraphGenerators.sampleLogNormal (6 milliseconds) [info] - GraphGenerators.logNormalGraph (408 milliseconds) [info] - SPARK-5064 GraphGenerators.rmatGraph numEdges upper bound (152 milliseconds) [info] SVDPlusPlusSuite: [info] - End-to-end event logging (3 seconds, 94 milliseconds) [info] - Test SVD++ with mean square error on training set (1 second, 112 milliseconds) [info] - Test SVD++ with no edges (184 milliseconds) [info] FailureTrackerSuite: [info] - failures expire if validity interval is set (223 milliseconds) [info] - failures never expire if validity interval is not set (-1) (4 milliseconds) [info] ClientSuite: [info] - default Yarn application classpath (51 milliseconds) [info] - default MR application classpath (1 millisecond) [info] - resultant classpath for an application that defines a classpath for YARN (316 milliseconds) [info] - resultant classpath for an application that defines a classpath for MR (23 milliseconds) [info] - resultant classpath for an application that defines both classpaths, YARN and MR (16 milliseconds) [info] - Local jar URIs (313 milliseconds) [info] - Jar path propagation through SparkConf (454 milliseconds) [info] - Cluster path translation (43 milliseconds) [info] - configuration and args propagate through createApplicationSubmissionContext (103 milliseconds) [info] - spark.yarn.jars with multiple paths and globs (239 milliseconds) [info] - distribute jars archive (131 milliseconds) [info] - distribute archive multiple times (524 milliseconds) [info] - distribute local spark jars (127 milliseconds) [info] - ignore same name jars (134 milliseconds) [info] - SPARK-31582 Being able to not populate Hadoop classpath (53 milliseconds) [info] - files URI match test1 (1 millisecond) [info] - files URI match test2 (1 millisecond) [info] - files URI match test3 (1 millisecond) [info] - wasb URI match test (1 millisecond) [info] - hdfs URI match test (0 milliseconds) [info] - files URI unmatch test1 (1 millisecond) [info] - files URI unmatch test2 (0 milliseconds) [info] - files URI unmatch test3 (1 millisecond) [info] - wasb URI unmatch test1 (1 millisecond) [info] - wasb URI unmatch test2 (1 millisecond) [info] - s3 URI unmatch test (1 millisecond) [info] - hdfs URI unmatch test1 (0 milliseconds) [info] - hdfs URI unmatch test2 (0 milliseconds) [info] YarnAllocatorSuite: [info] - single container allocated (135 milliseconds) [info] - container should not be created if requested number if met (46 milliseconds) [info] - some containers allocated (37 milliseconds) [info] - receive more containers than requested (47 milliseconds) [info] - decrease total requested executors (38 milliseconds) [info] - decrease total requested executors to less than currently running (46 milliseconds) [info] - kill executors (58 milliseconds) [info] - kill same executor multiple times (32 milliseconds) [info] - process same completed container multiple times (37 milliseconds) [info] - lost executor removed from backend (57 milliseconds) [info] - blacklisted nodes reflected in amClient requests (45 milliseconds) [info] - memory exceeded diagnostic regexes (1 millisecond) [info] - window based failure executor counting (38 milliseconds) [info] - SPARK-26269: YarnAllocator should have same blacklist behaviour with YARN (68 milliseconds) [info] ClientDistributedCacheManagerSuite: [info] - test getFileStatus empty (22 milliseconds) [info] - test getFileStatus cached (1 millisecond) [info] - test addResource (3 milliseconds) [info] - test addResource link null (1 millisecond) 
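
ClientSuite and YarnAllocatorSuite above test the YARN submission client and container allocator directly. One supported way to drive that submission path from user code is the launcher API; the sketch below is illustrative only (the jar path, main class, and spark.yarn.jars location are placeholders, and SPARK_HOME is assumed to be set in the environment):

    import org.apache.spark.launcher.SparkLauncher

    val handle = new SparkLauncher()
      .setMaster("yarn")
      .setDeployMode("cluster")
      .setAppResource("/path/to/app.jar")                 // placeholder
      .setMainClass("com.example.Main")                   // placeholder
      .setConf("spark.yarn.jars", "hdfs:///spark/jars/*") // pre-staged jars, cf. "spark.yarn.jars with multiple paths and globs"
      .startApplication()                                 // returns a SparkAppHandle for polling application state
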
[info] - test addResource appmaster only (1 millisecond) [info] - test addResource archive (1 millisecond) [info] ExtensionServiceIntegrationSuite: [info] - Instantiate (9 milliseconds) [info] - Contains SimpleExtensionService Service (3 milliseconds) [info] YarnAllocatorBlacklistTrackerSuite: [info] - expiring its own blacklisted nodes (2 milliseconds) [info] - not handling the expiry of scheduler blacklisted nodes (1 millisecond) [info] - combining scheduler and allocation blacklist (2 milliseconds) [info] - blacklist all available nodes (2 milliseconds) [info] YarnClusterSuite: [info] - End-to-end event logging with compression (11 seconds, 869 milliseconds) [info] - Event logging with password redaction (32 milliseconds) [info] - Log overwriting (103 milliseconds) [info] - Event log name (1 millisecond) [info] FileCommitProtocolInstantiationSuite: [info] - Dynamic partitions require appropriate constructor (1 millisecond) [info] - Standard partitions work with classic constructor (1 millisecond) [info] - Three arg constructors have priority (1 millisecond) [info] - Three arg constructors have priority when dynamic (0 milliseconds) [info] - The protocol must be of the correct class (1 millisecond) [info] - If there is no matching constructor, class hierarchy is irrelevant (1 millisecond) [info] JobCancellationSuite: [info] - local mode, FIFO scheduler (107 milliseconds) [info] - local mode, fair scheduler (141 milliseconds) [info] - cluster mode, FIFO scheduler (3 seconds, 167 milliseconds) [info] - cluster mode, fair scheduler (3 seconds, 10 milliseconds) [info] - do not put partially executed partitions into cache (77 milliseconds) [info] - job group (90 milliseconds) [info] - inherited job group (SPARK-6629) (85 milliseconds) [info] - job group with interruption (91 milliseconds) [info] - run Spark in yarn-client mode (19 seconds, 131 milliseconds) [info] - recover from repeated node failures during shuffle-reduce (39 seconds, 503 milliseconds) [info] - task reaper kills JVM if killed tasks keep running for too long (17 seconds, 470 milliseconds) [info] - basic usage (2 minutes, 4 seconds) [info] - recover from node failures with replication (10 seconds, 305 milliseconds) [info] - task reaper will not kill JVM if spark.task.killTimeout == -1 (14 seconds, 32 milliseconds) [info] - two jobs sharing the same stage (118 milliseconds) [info] - unpersist RDDs (4 seconds, 115 milliseconds) [info] - interruptible iterator of shuffle reader (211 milliseconds) [info] TaskContextSuite: [info] - run Spark in yarn-cluster mode (23 seconds, 31 milliseconds) [info] - provide metrics sources (141 milliseconds) [info] - calls TaskCompletionListener after failure (178 milliseconds) [info] - calls TaskFailureListeners after failure (63 milliseconds) [info] - all TaskCompletionListeners should be called even if some fail (6 milliseconds) [info] - all TaskFailureListeners should be called even if some fail (5 milliseconds) [info] - TaskContext.attemptNumber should return attempt number, not task id (SPARK-4014) (73 milliseconds) [info] - TaskContext.stageAttemptNumber getter (548 milliseconds) [info] - accumulators are updated on exception failures (122 milliseconds) [info] - failed tasks collect only accumulators whose values count during failures (58 milliseconds) [info] - only updated internal accumulators will be sent back to driver (66 milliseconds) [info] - localProperties are propagated to executors correctly (63 milliseconds) [info] - immediately call a completion listener if the context is 
completed (1 millisecond) [info] - immediately call a failure listener if the context has failed (1 millisecond) [info] - TaskCompletionListenerException.getMessage should include previousError (1 millisecond) [info] - all TaskCompletionListeners should be called even if some fail or a task (6 milliseconds) [info] DAGSchedulerSuite: [info] - [SPARK-3353] parent stage should have lower stage id (68 milliseconds) [info] - [SPARK-13902] Ensure no duplicate stages are created (44 milliseconds) [info] - All shuffle files on the slave should be cleaned up when slave lost (118 milliseconds) [info] - SPARK-32003: All shuffle files for executor should be cleaned up on fetch failure (120 milliseconds) [info] - zero split job (4 milliseconds) [info] - run trivial job (4 milliseconds) [info] - run trivial job w/ dependency (3 milliseconds) [info] - equals and hashCode AccumulableInfo (1 millisecond) [info] - cache location preferences w/ dependency (24 milliseconds) [info] - regression test for getCacheLocs (2 milliseconds) [info] - reference partitions inside a task (2 seconds, 979 milliseconds) [info] - getMissingParentStages should consider all ancestor RDDs' cache statuses (4 milliseconds) [info] - avoid exponential blowup when getting preferred locs list (51 milliseconds) [info] - unserializable task (17 milliseconds) [info] - trivial job failure (14 milliseconds) [info] - trivial job cancellation (4 milliseconds) [info] - job cancellation no-kill backend (6 milliseconds) [info] - run trivial shuffle (9 milliseconds) [info] - run trivial shuffle with fetch failure (20 milliseconds) [info] - shuffle files not lost when slave lost with shuffle service (124 milliseconds) [info] - shuffle files lost when worker lost with shuffle service (134 milliseconds) [info] ExecutorPodsSnapshotSuite: [info] - shuffle files lost when worker lost without shuffle service (100 milliseconds) [info] - States are interpreted correctly from pod metadata. (208 milliseconds) [info] - Updates add new pods for non-matching ids and edit existing pods for matching ids (5 milliseconds) [info] EnvSecretsFeatureStepSuite: [info] - sets up all keyRefs (22 milliseconds) [info] RDriverFeatureStepSuite: [info] - shuffle files not lost when executor failure with shuffle service (100 milliseconds) [info] - R Step modifies container correctly (83 milliseconds) [info] ExecutorPodsPollingSnapshotSourceSuite: [info] - shuffle files lost when executor failure without shuffle service (106 milliseconds) [info] - Items returned by the API should be pushed to the event queue (15 milliseconds) [info] BasicExecutorFeatureStepSuite: [info] - basic executor pod has reasonable defaults (34 milliseconds) [info] - Single stage fetch failure should not abort the stage. (33 milliseconds) [info] - executor pod hostnames get truncated to 63 characters (2 milliseconds) [info] - classpath and extra java options get translated into environment variables (4 milliseconds) [info] - test executor pyspark memory (4 milliseconds) [info] DriverKubernetesCredentialsFeatureStepSuite: [info] - Don't set any credentials (10 milliseconds) [info] - Only set credentials that are manually mounted. (3 milliseconds) [info] - Multiple consecutive stage fetch failures should lead to job being aborted. (27 milliseconds) [info] - Mount credentials from the submission client as a secret. (71 milliseconds) [info] ClientSuite: [info] - The client should configure the pod using the builder. 
(101 milliseconds) [info] - Failures in different stages should not trigger an overall abort (48 milliseconds) [info] - The client should create Kubernetes resources (4 milliseconds) [info] - Waiting for app completion should stall on the watcher (3 milliseconds) [info] DriverServiceFeatureStepSuite: [info] - Headless service has a port for the driver RPC and the block manager. (18 milliseconds) [info] - Hostname and ports are set according to the service name. (1 millisecond) [info] - Ports should resolve to defaults in SparkConf and in the service. (1 millisecond) [info] - Long prefixes should switch to using a generated name. (2 milliseconds) [info] - Disallow bind address and driver host to be set explicitly. (0 milliseconds) [info] KubernetesDriverBuilderSuite: [info] - Apply fundamental steps all the time. (9 milliseconds) [info] - Apply secrets step if secrets are present. (3 milliseconds) [info] - Apply Java step if main resource is none. (3 milliseconds) [info] - Apply Python step if main resource is python. (4 milliseconds) [info] - Apply volumes step if mounts are present. (4 milliseconds) [info] - Apply R step if main resource is R. (2 milliseconds) [info] ExecutorPodsAllocatorSuite: [info] - Initially request executors in batches. Do not request another batch if the first has not finished. (29 milliseconds) [info] - Non-consecutive stage failures don't trigger abort (82 milliseconds) [info] - Request executors in batches. Allow another batch to be requested if all pending executors start running. (14 milliseconds) [info] - When a current batch reaches error states immediately, re-request them on the next batch. (11 milliseconds) [info] - When an executor is requested but the API does not report it in a reasonable time, retry requesting that executor. (5 milliseconds) [info] KubernetesClusterSchedulerBackendSuite: [info] - trivial shuffle with multiple fetch failures (9 milliseconds) [info] - Start all components (4 milliseconds) [info] - Stop all components (9 milliseconds) [info] - Remove executor (2 milliseconds) [info] - Kill executors (7 milliseconds) [info] - Request total executors (2 milliseconds) [info] KubernetesConfSuite: [info] - Basic driver translated fields. (4 milliseconds) [info] - Creating driver conf with and without the main app jar influences spark.jars (5 milliseconds) [info] - Creating driver conf with a python primary file (2 milliseconds) [info] - Creating driver conf with a r primary file (1 millisecond) [info] - Testing explicit setting of memory overhead on non-JVM tasks (2 milliseconds) [info] - Resolve driver labels, annotations, secret mount paths, envs, and memory overhead (4 milliseconds) [info] - Basic executor translated fields. (0 milliseconds) [info] - Image pull secrets. 
(1 millisecond) [info] - Set executor labels, annotations, and secrets (2 milliseconds) [info] KubernetesVolumeUtilsSuite: [info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure (24 milliseconds) [info] - Parses hostPath volumes correctly (4 milliseconds) [info] - Parses persistentVolumeClaim volumes correctly (4 milliseconds) [info] - Parses emptyDir volumes correctly (1 millisecond) [info] - Parses emptyDir volume options can be optional (0 milliseconds) [info] - Defaults optional readOnly to false (1 millisecond) [info] - Gracefully fails on missing mount key (1 millisecond) [info] - Gracefully fails on missing option key (1 millisecond) [info] BasicDriverFeatureStepSuite: [info] - Check the pod respects all configurations from the user. (9 milliseconds) [info] - Check appropriate entrypoint rerouting for various bindings (2 milliseconds) [info] - Additional system properties resolve jars and set cluster-mode confs. (2 milliseconds) [info] ExecutorPodsSnapshotsStoreSuite: [info] - Subscribers get notified of events periodically. (7 milliseconds) [info] - Even without sending events, initially receive an empty buffer. (1 millisecond) [info] - Replacing the snapshot passes the new snapshot to subscribers. (2 milliseconds) [info] MountVolumesFeatureStepSuite: [info] - Mounts hostPath volumes (5 milliseconds) [info] - Mounts pesistentVolumeClaims (3 milliseconds) [info] - Mounts emptyDir (3 milliseconds) [info] - Mounts emptyDir with no options (1 millisecond) [info] - Mounts multiple volumes (1 millisecond) [info] MountSecretsFeatureStepSuite: [info] - mounts all given secrets (4 milliseconds) [info] ExecutorPodsLifecycleManagerSuite: [info] - Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled (25 milliseconds) [info] - When an executor reaches error states immediately, remove from the scheduler backend. (10 milliseconds) [info] - Don't remove executors twice from Spark but remove from K8s repeatedly. (4 milliseconds) [info] - When the scheduler backend lists executor ids that aren't present in the cluster, remove those executors from Spark. (2 milliseconds) [info] JavaDriverFeatureStepSuite: [info] - Java Step modifies container correctly (2 milliseconds) [info] ExecutorPodsWatchSnapshotSourceSuite: [info] - Watch events should be pushed to the snapshots store as snapshot updates. (2 milliseconds) [info] LocalDirsFeatureStepSuite: [info] - Fail the job if a barrier ResultTask failed (17 milliseconds) [info] - Resolve to default local dir if neither env nor configuration are set (36 milliseconds) [info] - Use configured local dirs split on comma if provided. (1 millisecond) [info] PythonDriverFeatureStepSuite: [info] - Python Step modifies container correctly (8 milliseconds) [info] - Python Step testing empty pyfiles (2 milliseconds) [info] KubernetesExecutorBuilderSuite: [info] - Basic steps are consistently applied. (3 milliseconds) [info] - Apply secrets step if secrets are present. (1 millisecond) [info] - Apply volumes step if mounts are present. 
(1 millisecond) [info] - late fetch failures don't cause multiple concurrent attempts for the same map stage (12 milliseconds) [info] - extremely late fetch failures don't cause multiple concurrent attempts for the same stage (38 milliseconds) [info] - task events always posted in speculation / when stage is killed (25 milliseconds) [info] - ignore late map task completions (9 milliseconds) [info] - run shuffle with map stage failure (4 milliseconds) [info] - shuffle fetch failure in a reused shuffle dependency (17 milliseconds) [info] - don't submit stage until its dependencies map outputs are registered (SPARK-5259) (21 milliseconds) [info] - register map outputs correctly after ExecutorLost and task Resubmitted (9 milliseconds) [info] - failure of stage used by two jobs (9 milliseconds) [info] - stage used by two jobs, the first no longer active (SPARK-6880) (12 milliseconds) [info] ReceiverTrackerSuite: [info] - stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) (17 milliseconds) [info] - run trivial shuffle with out-of-band executor failure and retry (11 milliseconds) [info] - recursive shuffle failures (23 milliseconds) [info] - cached post-shuffle (18 milliseconds) [info] - misbehaved accumulator should not crash DAGScheduler and SparkContext (25 milliseconds) [info] - misbehaved accumulator should not impact other accumulators (14 milliseconds) [info] - misbehaved resultHandler should not crash DAGScheduler and SparkContext (19 milliseconds) [info] - getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) (14 milliseconds) [info] - getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) (13 milliseconds) [info] - accumulator not calculated for resubmitted result stage (4 milliseconds) [info] - accumulator not calculated for resubmitted task in result stage (4 milliseconds) [info] - accumulators are updated on exception failures and task killed (3 milliseconds) [info] - reduce tasks should be placed locally with map output (8 milliseconds) [info] - reduce task locality preferences should only include machines with largest map outputs (13 milliseconds) [info] - stages with both narrow and shuffle dependencies use narrow ones for locality (8 milliseconds) [info] - Spark exceptions should include call site in stack trace (34 milliseconds) [info] - catch errors in event loop (4 milliseconds) [info] - simple map stage submission (15 milliseconds) [info] - map stage submission with reduce stage also depending on the data (9 milliseconds) [info] - map stage submission with fetch failure (20 milliseconds) [info] - map stage submission with multiple shared stages and failures (34 milliseconds) [info] - Trigger mapstage's job listener in submitMissingTasks (14 milliseconds) [info] - send rate update to receivers (2 seconds, 512 milliseconds) [info] - map stage submission with executor failure late map task completions (13 milliseconds) [info] - getShuffleDependencies correctly returns only direct shuffle parents (1 millisecond) [info] - should restart receiver after stopping it (927 milliseconds) [info] - SPARK-11063: TaskSetManager should use Receiver RDD's preferredLocations (594 milliseconds) [info] - SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stagesstill behave correctly on fetch failures (1 second, 436 milliseconds) [info] - [SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts 
(17 milliseconds) [info] - task end event should have updated accumulators (SPARK-20342) (148 milliseconds) [info] - get allocated executors (499 milliseconds) [info] RateLimitedOutputStreamSuite: [info] - Barrier task failures from the same stage attempt don't trigger multiple stage retries (10 milliseconds) [info] - Barrier task failures from a previous stage attempt don't trigger stage retry (9 milliseconds) [info] - SPARK-23207: retry all the succeeding stages when the map stage is indeterminate (14 milliseconds) [info] - SPARK-29042: Sampled RDD with unordered input should be indeterminate (6 milliseconds) [info] - SPARK-23207: cannot rollback a result stage (8 milliseconds) [info] - SPARK-23207: local checkpoint fail to rollback (checkpointed before) (35 milliseconds) [info] - SPARK-23207: local checkpoint fail to rollback (checkpointing now) (13 milliseconds) [info] - SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) (81 milliseconds) [info] - SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) (21 milliseconds) [info] - SPARK-28699: abort stage if parent stage is indeterminate stage (9 milliseconds) [info] PrefixComparatorsSuite: [info] - String prefix comparator (134 milliseconds) [info] - Binary prefix comparator (9 milliseconds) [info] - double prefix comparator handles NaNs properly (0 milliseconds) [info] - double prefix comparator handles negative NaNs properly (1 millisecond) [info] - double prefix comparator handles other special values properly (1 millisecond) [info] MasterWebUISuite: [info] - kill application (285 milliseconds) [info] - kill driver (112 milliseconds) [info] SorterSuite: [info] - equivalent to Arrays.sort (52 milliseconds) [info] - KVArraySorter (101 milliseconds) [info] - write (4 seconds, 166 milliseconds) [info] RecurringTimerSuite: [info] - basic (5 milliseconds) [info] - SPARK-10224: call 'callback' after stopping (8 milliseconds) [info] InputStreamsSuite: Exception in thread "receiver-supervisor-future-0" java.lang.Error: java.lang.InterruptedException: sleep interrupted at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply$mcV$sp(ReceiverSupervisor.scala:196) at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189) at org.apache.spark.streaming.receiver.ReceiverSupervisor$$anonfun$restartReceiver$1.apply(ReceiverSupervisor.scala:189) at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ... 
2 more [info] - socket input stream (700 milliseconds) [info] - socket input stream - no block in a batch (369 milliseconds) [info] - run Spark in yarn-client mode with different configurations, ensuring redaction (19 seconds, 29 milliseconds) [info] - binary records stream (6 seconds, 218 milliseconds) [info] - file input stream - newFilesOnly = true (508 milliseconds) [info] - file input stream - newFilesOnly = false (404 milliseconds) [info] - file input stream - wildcard (662 milliseconds) [info] - multi-thread receiver (2 seconds, 13 milliseconds) [info] - queue input stream - oneAtATime = true (1 second, 102 milliseconds) [info] - queue input stream - oneAtATime = false (2 seconds, 90 milliseconds) [info] - test track the number of input stream (100 milliseconds) [info] WriteAheadLogUtilsSuite: [info] - log selection and creation (50 milliseconds) [info] - wrap WriteAheadLog in BatchedWriteAheadLog when batching is enabled (6 milliseconds) [info] - batching is enabled by default in WriteAheadLog (1 millisecond) [info] - closeFileAfterWrite is disabled by default in WriteAheadLog (1 millisecond) [info] ReceiverSchedulingPolicySuite: [info] - rescheduleReceiver: empty executors (0 milliseconds) [info] - rescheduleReceiver: receiver preferredLocation (1 millisecond) [info] - rescheduleReceiver: return all idle executors if there are any idle executors (6 milliseconds) [info] - rescheduleReceiver: return all executors that have minimum weight if no idle executors (4 milliseconds) [info] - scheduleReceivers: schedule receivers evenly when there are more receivers than executors (3 milliseconds) [info] - scheduleReceivers: schedule receivers evenly when there are more executors than receivers (4 milliseconds) [info] - scheduleReceivers: schedule receivers evenly when the preferredLocations are even (7 milliseconds) [info] - scheduleReceivers: return empty if no receiver (1 millisecond) [info] - scheduleReceivers: return empty scheduled executors if no executors (1 millisecond) [info] PIDRateEstimatorSuite: [info] - the right estimator is created (13 milliseconds) [info] - estimator checks ranges (2 milliseconds) [info] - first estimate is None (2 milliseconds) [info] - second estimate is not None (2 milliseconds) [info] - no estimate when no time difference between successive calls (2 milliseconds) [info] - no estimate when no records in previous batch (1 millisecond) [info] - no estimate when there is no processing delay (0 milliseconds) [info] - estimate is never less than min rate (20 milliseconds) [info] - with no accumulated or positive error, |I| > 0, follow the processing speed (4 milliseconds) [info] - with no accumulated but some positive error, |I| > 0, follow the processing speed (3 milliseconds) [info] - with some accumulated and some positive error, |I| > 0, stay below the processing speed (19 milliseconds) [info] ReceivedBlockHandlerSuite: [info] - BlockManagerBasedBlockHandler - store blocks (728 milliseconds) [info] - BlockManagerBasedBlockHandler - handle errors in storing block (16 milliseconds) [info] - WriteAheadLogBasedBlockHandler - store blocks (786 milliseconds) [info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (36 milliseconds) [info] - WriteAheadLogBasedBlockHandler - clean old blocks (139 milliseconds) [info] - Test Block - count messages (193 milliseconds) [info] - Test Block - isFullyConsumed (39 milliseconds) [info] - SPARK-5984 TimSort bug (18 seconds, 972 milliseconds) [info] - Sorter benchmark for key-value pairs !!! IGNORED !!! 
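
The InputStreamsSuite and ReceivedBlockHandlerSuite entries above exercise receiver-based input and the write-ahead-log block handler. A minimal sketch under stated assumptions (host, port, and directories are placeholders; enabling the receiver WAL requires a checkpoint directory):

    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("socket-wal-sketch")
      .set("spark.streaming.receiver.writeAheadLog.enable", "true") // selects the WriteAheadLogBasedBlockHandler path
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint("/tmp/checkpoint")                 // placeholder; WAL files live under it
    val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
    ssc.start()
    ssc.awaitTerminationOrTimeout(10000)              // bounded wait, as a test harness would use
    ssc.stop()
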
[info] - Sorter benchmark for primitive int array !!! IGNORED !!! [info] RandomSamplerSuite: [info] - utilities (8 milliseconds) [info] ReceivedBlockHandlerWithEncryptionSuite: [info] - sanity check medianKSD against references (97 milliseconds) [info] - bernoulli sampling (38 milliseconds) [info] - bernoulli sampling without iterator (37 milliseconds) [info] - bernoulli sampling with gap sampling optimization (76 milliseconds) [info] - bernoulli sampling (without iterator) with gap sampling optimization (83 milliseconds) [info] - bernoulli boundary cases (1 millisecond) [info] - bernoulli (without iterator) boundary cases (2 milliseconds) [info] - bernoulli data types (99 milliseconds) [info] - bernoulli clone (18 milliseconds) [info] - bernoulli set seed (34 milliseconds) [info] - BlockManagerBasedBlockHandler - store blocks (449 milliseconds) [info] - replacement sampling (53 milliseconds) [info] - replacement sampling without iterator (45 milliseconds) [info] - BlockManagerBasedBlockHandler - handle errors in storing block (25 milliseconds) [info] - replacement sampling with gap sampling (142 milliseconds) [info] - replacement sampling (without iterator) with gap sampling (168 milliseconds) [info] - replacement boundary cases (1 millisecond) [info] - replacement (without) boundary cases (1 millisecond) [info] - replacement data types (104 milliseconds) [info] - replacement clone (30 milliseconds) [info] - replacement set seed (73 milliseconds) [info] - bernoulli partitioning sampling (27 milliseconds) [info] - bernoulli partitioning sampling without iterator (26 milliseconds) [info] - bernoulli partitioning boundary cases (0 milliseconds) [info] - bernoulli partitioning (without iterator) boundary cases (3 milliseconds) [info] - bernoulli partitioning data (0 milliseconds) [info] - bernoulli partitioning clone (0 milliseconds) [info] PoolSuite: [info] - FIFO Scheduler Test (64 milliseconds) [info] - Fair Scheduler Test (86 milliseconds) [info] - Nested Pool Test (62 milliseconds) [info] - SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid datas (7 milliseconds) [info] - FIFO scheduler uses root pool and not spark.scheduler.pool property (80 milliseconds) [info] - WriteAheadLogBasedBlockHandler - store blocks (897 milliseconds) [info] - FAIR Scheduler uses default pool when spark.scheduler.pool property is not set (144 milliseconds) [info] - WriteAheadLogBasedBlockHandler - handle errors in storing block (39 milliseconds) [info] - FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool (110 milliseconds) [info] - Pool should throw IllegalArgumentException when schedulingMode is not supported (1 millisecond) [info] - Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set (69 milliseconds) [info] - WriteAheadLogBasedBlockHandler - clean old blocks (99 milliseconds) [info] - Fair Scheduler should use default file(fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set (52 milliseconds) [info] - Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set (60 milliseconds) [info] DiskStoreSuite: [info] - reads of memory-mapped and non memory-mapped files are equivalent (36 milliseconds) [info] - block size tracking (32 milliseconds) [info] - blocks larger than 2gb (33 milliseconds) [info] - block data encryption (56 milliseconds) [info] BlockManagerReplicationSuite: [info] - Test 
Block - count messages (227 milliseconds) [info] - Test Block - isFullyConsumed (46 milliseconds) [info] InputInfoTrackerSuite: [info] - get peers with addition and removal of block managers (33 milliseconds) [info] - test report and get InputInfo from InputInfoTracker (1 millisecond) [info] - test cleanup InputInfo from InputInfoTracker (2 milliseconds) [info] JobGeneratorSuite: [info] - block replication - 2x replication (673 milliseconds) [info] - block replication - 3x replication (1 second, 355 milliseconds) [info] - run Spark in yarn-cluster mode with different configurations, ensuring redaction (19 seconds, 35 milliseconds) [info] - SPARK-6222: Do not clear received block data too soon (2 seconds, 720 milliseconds) [info] ReceivedBlockTrackerSuite: [info] - block addition, and block to batch allocation (5 milliseconds) [info] - block replication - mixed between 1x to 5x (2 seconds, 126 milliseconds) [info] - block replication - off-heap (371 milliseconds) [info] - block replication - 2x replication without peers (1 millisecond) [info] - block replication - replication failures (90 milliseconds) [info] - block replication - addition and deletion of block managers (325 milliseconds) [info] BlockManagerProactiveReplicationSuite: [info] - get peers with addition and removal of block managers (44 milliseconds) [info] - block replication - 2x replication (625 milliseconds) [info] - block replication - 3x replication (1 second, 296 milliseconds) [info] - block replication - mixed between 1x to 5x (2 seconds, 14 milliseconds) [info] - block replication - off-heap (300 milliseconds) [info] - block replication - 2x replication without peers (1 millisecond) [info] - block replication - replication failures (44 milliseconds) [info] - block replication - addition and deletion of block managers (241 milliseconds) [info] - proactive block replication - 2 replicas - 1 block manager deletions (121 milliseconds) [info] - proactive block replication - 3 replicas - 2 block manager deletions (163 milliseconds) [info] - proactive block replication - 4 replicas - 3 block manager deletions (145 milliseconds) [info] - proactive block replication - 5 replicas - 4 block manager deletions (481 milliseconds) [info] BlockManagerBasicStrategyReplicationSuite: [info] - get peers with addition and removal of block managers (24 milliseconds) [info] - block replication - 2x replication (547 milliseconds) [info] - block addition, and block to batch allocation with many blocks (12 seconds, 514 milliseconds) [info] - recovery with write ahead logs should remove only allocated blocks from received queue (19 milliseconds) [info] - block replication - 3x replication (959 milliseconds) [info] - block allocation to batch should not loose blocks from received queue (214 milliseconds) [info] - recovery and cleanup with write ahead logs (41 milliseconds) [info] - disable write ahead log when checkpoint directory is not set (1 millisecond) [info] - parallel file deletion in FileBasedWriteAheadLog is robust to deletion error (22 milliseconds) [info] WindowOperationsSuite: [info] - window - basic window (509 milliseconds) [info] - window - tumbling window (356 milliseconds) [info] - window - larger window (580 milliseconds) [info] - block replication - mixed between 1x to 5x (1 second, 807 milliseconds) [info] - window - non-overlapping window (363 milliseconds) [info] - window - persistence level (92 milliseconds) [info] - block replication - off-heap (260 milliseconds) [info] - block replication - 2x replication without peers (0 
milliseconds) [info] - reduceByKeyAndWindow - basic reduction (335 milliseconds) [info] - block replication - replication failures (50 milliseconds) [info] - reduceByKeyAndWindow - key already in window and new value added into window (305 milliseconds) [info] - block replication - addition and deletion of block managers (267 milliseconds) [info] FlatmapIteratorSuite: [info] - reduceByKeyAndWindow - new key added into window (303 milliseconds) [info] - Flatmap Iterator to Disk (99 milliseconds) [info] - Flatmap Iterator to Memory (72 milliseconds) [info] - Serializer Reset (104 milliseconds) [info] RDDSuite: [info] - reduceByKeyAndWindow - key removed from window (535 milliseconds) [info] - reduceByKeyAndWindow - larger slide time (419 milliseconds) [info] - basic operations (491 milliseconds) [info] - serialization (2 milliseconds) [info] - countApproxDistinct (73 milliseconds) [info] - SparkContext.union (34 milliseconds) [info] - SparkContext.union parallel partition listing (68 milliseconds) [info] - SparkContext.union creates UnionRDD if at least one RDD has no partitioner (3 milliseconds) [info] - SparkContext.union creates PartitionAwareUnionRDD if all RDDs have partitioners (4 milliseconds) [info] - PartitionAwareUnionRDD raises exception if at least one RDD has no partitioner (2 milliseconds) [info] - SPARK-23778: empty RDD in union should not produce a UnionRDD (5 milliseconds) [info] - partitioner aware union (211 milliseconds) [info] - UnionRDD partition serialized size should be small (4 milliseconds) [info] - fold (10 milliseconds) [info] - fold with op modifying first arg (10 milliseconds) [info] - aggregate (13 milliseconds) [info] - reduceByKeyAndWindow - big test (634 milliseconds) [info] - reduceByKeyAndWindow with inverse function - basic reduction (318 milliseconds) [info] - treeAggregate (544 milliseconds) [info] - reduceByKeyAndWindow with inverse function - key already in window and new value added into window (316 milliseconds) [info] - treeAggregate with ops modifying first args (475 milliseconds) [info] - reduceByKeyAndWindow with inverse function - new key added into window (463 milliseconds) [info] - treeReduce (265 milliseconds) [info] - basic caching (28 milliseconds) [info] - caching with failures (16 milliseconds) [info] - empty RDD (141 milliseconds) [info] - reduceByKeyAndWindow with inverse function - key removed from window (350 milliseconds) [info] - yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630) (19 seconds, 34 milliseconds) [info] - reduceByKeyAndWindow with inverse function - larger slide time (340 milliseconds) [info] - repartitioned RDDs (654 milliseconds) [info] - reduceByKeyAndWindow with inverse function - big test (503 milliseconds) [info] - reduceByKeyAndWindow with inverse and filter functions - big test (507 milliseconds) [info] - groupByKeyAndWindow (517 milliseconds) [info] - countByWindow (373 milliseconds) [info] - countByValueAndWindow (310 milliseconds) [info] StreamingListenerSuite: [info] - batch info reporting (609 milliseconds) [info] - receiver info reporting (168 milliseconds) [info] - output operation reporting (1 second, 71 milliseconds) [info] - don't call ssc.stop in listener (985 milliseconds) [info] - onBatchCompleted with successful batch (1 second, 1 millisecond) [info] - onBatchCompleted with failed batch and one failed job (1 second, 1 millisecond) [info] - repartitioned RDDs perform load balancing (7 seconds, 525 milliseconds) [info] - coalesced RDDs (149 milliseconds) [info] 
- coalesced RDDs with locality (42 milliseconds) [info] - coalesced RDDs with partial locality (31 milliseconds) [info] - onBatchCompleted with failed batch and multiple failed jobs (1 second, 1 millisecond) [info] - StreamingListener receives no events after stopping StreamingListenerBus (399 milliseconds) [info] ReceiverInputDStreamSuite: [info] - Without WAL enabled: createBlockRDD creates empty BlockRDD when no block info (61 milliseconds) [info] - Without WAL enabled: createBlockRDD creates correct BlockRDD with block info (77 milliseconds) [info] - Without WAL enabled: createBlockRDD filters non-existent blocks before creating BlockRDD (68 milliseconds) [info] - With WAL enabled: createBlockRDD creates empty WALBackedBlockRDD when no block info (69 milliseconds) [info] - With WAL enabled: createBlockRDD creates correct WALBackedBlockRDD with all block info having WAL info (65 milliseconds) [info] - With WAL enabled: createBlockRDD creates BlockRDD when some block info don't have WAL info (75 milliseconds) [info] WriteAheadLogBackedBlockRDDSuite: [info] - coalesced RDDs with locality, large scale (10K partitions) (1 second, 135 milliseconds) [info] - Read data available in both block manager and write ahead log (86 milliseconds) [info] - Read data available only in block manager, not in write ahead log (48 milliseconds) [info] - Read data available only in write ahead log, not in block manager (51 milliseconds) [info] - Read data with partially available in block manager, and rest in write ahead log (52 milliseconds) [info] - Test isBlockValid skips block fetching from BlockManager (110 milliseconds) [info] - Test whether RDD is valid after removing blocks from block manager (105 milliseconds) [info] - coalesced RDDs with partial locality, large scale (10K partitions) (511 milliseconds) [info] - coalesced RDDs with locality, fail first pass (16 milliseconds) [info] - Test storing of blocks recovered from write ahead log back into block manager (100 milliseconds) [info] - zipped RDDs (31 milliseconds) Exception in thread "block-manager-slave-async-thread-pool-3" Exception in thread "block-manager-slave-async-thread-pool-4" java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.spark.storage.BlockInfoManager.lockForWriting(BlockInfoManager.scala:236) at org.apache.spark.storage.BlockManager.removeBlock(BlockManager.scala:1571) at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply$mcZ$sp(BlockManagerSlaveEndpoint.scala:47) at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46) at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46) at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$1.apply(BlockManagerSlaveEndpoint.scala:86) at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ... 
2 more
java.lang.Error: java.lang.InterruptedException
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
  at java.lang.Object.wait(Native Method)
  at java.lang.Object.wait(Object.java:502)
  at org.apache.spark.storage.BlockInfoManager.lockForWriting(BlockInfoManager.scala:236)
  at org.apache.spark.storage.BlockManager.removeBlock(BlockManager.scala:1571)
  at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply$mcZ$sp(BlockManagerSlaveEndpoint.scala:47)
  at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46)
  at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(BlockManagerSlaveEndpoint.scala:46)
  at org.apache.spark.storage.BlockManagerSlaveEndpoint$$anonfun$1.apply(BlockManagerSlaveEndpoint.scala:86)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  ... 2 more
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@34f4e8a6 rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.Promise$class.failure(Promise.scala:104)
  at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
  at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
  at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
  at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
  at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3f851cd1 rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.Promise$class.failure(Promise.scala:104)
  at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
  at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
  at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
  at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
  at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@190e3da3 rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.Promise$class.failure(Promise.scala:104)
  at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:157)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:194)
  at scala.concurrent.Future$$anonfun$failed$1.apply(Future.scala:192)
  at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
  at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
  at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
  at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
  at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
  at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@707fff6a rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3e0c83d6 rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@7ded357a rejected from java.util.concurrent.ThreadPoolExecutor@3142f551[Shutting down, pool size = 3, active threads = 3, queued tasks = 0, completed tasks = 2]
  at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
  at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
  at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
  at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
  at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
  at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
  at scala.concurrent.Promise$class.complete(Promise.scala:55)
  at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:157)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:23)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
[info] - partition pruning (13 milliseconds)
[info] - read data in block manager and WAL with encryption on (143 milliseconds)
[info] TimeSuite:
[info] - less (0 milliseconds)
[info] - lessEq (0 milliseconds)
[info] - greater (0 milliseconds)
[info] - greaterEq (1 millisecond)
[info] - plus (1 millisecond)
[info] - minus Time (1 millisecond)
[info] - minus Duration (0 milliseconds)
[info] - floor (0 milliseconds)
[info] - isMultipleOf (1 millisecond)
[info] - min (0 milliseconds)
[info] - max (0 milliseconds)
[info] - until (2 milliseconds)
[info] - to (1 millisecond)
[info] DStreamScopeSuite:
[info] - dstream without scope (1 millisecond)
[info] - input dstream without scope (3 milliseconds)
[info] - scoping simple operations (10 milliseconds)
[info] - scoping nested operations (21 milliseconds)
[info] - transform should allow RDD operations to be captured in scopes (13 milliseconds)
[info] - foreachRDD should allow RDD operations to be captured in scope (21 milliseconds)
[info] ReceiverSuite:
[info] - receiver life cycle (342 milliseconds)
[info] - block generator throttling !!! IGNORED !!!
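Editor's note: the repeated RejectedExecutionException traces above all point at the same pool (ThreadPoolExecutor@3142f551) already in the "Shutting down" state. Once shutdown() has been called, the default AbortPolicy refuses any further execute() call, which is exactly the path the Promise callbacks hit during test teardown. A minimal standalone sketch of that behavior (not Spark code; the pool size and task are arbitrary):

    import java.util.concurrent.{Executors, RejectedExecutionException}

    object RejectedAfterShutdown {
      def main(args: Array[String]): Unit = {
        val pool = Executors.newFixedThreadPool(3)
        pool.shutdown() // pool is now "Shutting down", like the one in the traces
        try {
          // Any task submitted after shutdown() is refused by the default AbortPolicy
          pool.execute(new Runnable { def run(): Unit = println("never runs") })
        } catch {
          case e: RejectedExecutionException => println("rejected: " + e.getMessage)
        }
      }
    }

Since the surrounding suites still report as passing, these traces read as teardown noise rather than test failures.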
[info] - collect large number of empty partitions (6 seconds, 515 milliseconds)
[info] - take (1 second, 274 milliseconds)
[info] - top with predefined ordering (69 milliseconds)
[info] - top with custom ordering (8 milliseconds)
[info] - takeOrdered with predefined ordering (7 milliseconds)
[info] - takeOrdered with limit 0 (0 milliseconds)
[info] - takeOrdered with custom ordering (7 milliseconds)
[info] - isEmpty (56 milliseconds)
[info] - sample preserves partitioner (2 milliseconds)
[info] - run Spark in yarn-client mode with additional jar (18 seconds, 32 milliseconds)
[info] - write ahead log - generating and cleaning (14 seconds, 321 milliseconds)
[info] StateMapSuite:
[info] - EmptyStateMap (1 millisecond)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove (10 milliseconds)
[info] - OpenHashMapBasedStateMap - put, get, getByTime, getAll, remove with copy (1 millisecond)
[info] - OpenHashMapBasedStateMap - serializing and deserializing (62 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with compaction (8 milliseconds)
[info] - OpenHashMapBasedStateMap - all possible sequences of operations with copies (6 seconds, 689 milliseconds)
[info] - OpenHashMapBasedStateMap - serializing and deserializing with KryoSerializable states (14 milliseconds)
[info] - EmptyStateMap - serializing and deserializing (19 milliseconds)
[info] - MapWithStateRDDRecord - serializing and deserializing with KryoSerializable states (17 milliseconds)
[info] UIUtilsSuite:
[info] - shortTimeUnitString (0 milliseconds)
[info] - normalizeDuration (4 milliseconds)
[info] - convertToTimeUnit (1 millisecond)
[info] - formatBatchTime (1 millisecond)
[info] DurationSuite:
[info] - less (0 milliseconds)
[info] - lessEq (0 milliseconds)
[info] - greater (1 millisecond)
[info] - greaterEq (1 millisecond)
[info] - plus (1 millisecond)
[info] - minus (1 millisecond)
[info] - times (0 milliseconds)
[info] - div (0 milliseconds)
[info] - isMultipleOf (1 millisecond)
[info] - min (0 milliseconds)
[info] - max (0 milliseconds)
[info] - isZero (0 milliseconds)
[info] - Milliseconds (0 milliseconds)
[info] - Seconds (1 millisecond)
[info] - Minutes (1 millisecond)
[info] MapWithStateRDDSuite:
[info] - creation from pair RDD (355 milliseconds)
[info] - updating state and generating mapped data in MapWithStateRDDRecord (5 milliseconds)
[info] - states generated by MapWithStateRDD (1 second, 114 milliseconds)
[info] - checkpointing (1 second, 255 milliseconds)
[info] - takeSample (17 seconds, 28 milliseconds)
[info] - takeSample from an empty rdd (7 milliseconds)
[info] - checkpointing empty state RDD (437 milliseconds)
[info] DStreamClosureSuite:
[info] - randomSplit (232 milliseconds)
[info] - runJob on an invalid partition (4 milliseconds)
[info] - sort an empty RDD (32 milliseconds)
[info] - user provided closures are actually cleaned (46 milliseconds)
[info] UISeleniumSuite:
[info] - sortByKey (149 milliseconds)
[info] - sortByKey ascending parameter (84 milliseconds)
[info] - sortByKey with explicit ordering (59 milliseconds)
[info] - repartitionAndSortWithinPartitions (49 milliseconds)
[info] - cartesian on empty RDD (10 milliseconds)
[info] - cartesian on non-empty RDDs (35 milliseconds)
[info] - intersection (52 milliseconds)
[info] - intersection strips duplicates in an input (45 milliseconds)
[info] - zipWithIndex (26 milliseconds)
[info] - zipWithIndex with a single partition (8 milliseconds)
[info] - zipWithIndex chained with other RDDs (SPARK-4433) (20 milliseconds)
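Editor's note: the take/top/takeOrdered results above exercise the ordered-retrieval RDD API, which is stable public Spark API. A small local sketch for reference (the object name and local master are illustrative, not from the log):

    import org.apache.spark.{SparkConf, SparkContext}

    object TakeOrderedDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local[2]").setAppName("takeOrdered-demo"))
        val rdd = sc.parallelize(Seq(5, 3, 9, 1, 7))
        // takeOrdered(k) returns the k smallest elements; top(k) the k largest
        println(rdd.takeOrdered(3).toList) // List(1, 3, 5)
        println(rdd.top(2).toList)         // List(9, 7)
        // A custom ordering flips what "smallest" means
        println(rdd.takeOrdered(2)(Ordering[Int].reverse).toList) // List(9, 7)
        sc.stop()
      }
    }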
[info] - zipWithUniqueId (36 milliseconds)
[info] - retag with implicit ClassTag (20 milliseconds)
[info] - parent method (3 milliseconds)
[info] - getNarrowAncestors (11 milliseconds)
[info] - getNarrowAncestors with multiple parents (12 milliseconds)
[info] - getNarrowAncestors with cycles (12 milliseconds)
[info] - task serialization exception should not hang scheduler (22 milliseconds)
[info] - RDD.partitions() fails fast when partitions indicies are incorrect (SPARK-13021) (2 milliseconds)
[info] - nested RDDs are not supported (SPARK-5063) (16 milliseconds)
[info] - actions cannot be performed inside of transformations (SPARK-5063) (15 milliseconds)
[info] - custom RDD coalescer (206 milliseconds)
[info] - SPARK-18406: race between end-of-task and completion iterator read lock release (17 milliseconds)
[info] - SPARK-23496: order of input partitions can result in severe skew in coalesce (4 milliseconds)
[info] - cannot run actions after SparkContext has been stopped (SPARK-5063) (133 milliseconds)
[info] - cannot call methods on a stopped SparkContext (SPARK-5063) (1 millisecond)
[info] - run Spark in yarn-cluster mode with additional jar (18 seconds, 27 milliseconds)
[info] BasicSchedulerIntegrationSuite:
[info] - super simple job (137 milliseconds)
[info] - multi-stage job (155 milliseconds)
[info] - job with fetch failure (317 milliseconds)
[info] - job failure after 4 attempts (103 milliseconds)
[info] OutputCommitCoordinatorSuite:
[info] - Only one of two duplicate commit tasks should commit (55 milliseconds)
[info] - If commit fails, if task is retried it should not be locked, and will succeed. (49 milliseconds)
[info] - attaching and detaching a Streaming tab (1 second, 969 milliseconds)
[info] FileBasedWriteAheadLogSuite:
[info] - FileBasedWriteAheadLog - read all logs (33 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (17 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (19 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (15 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (14 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (49 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (13 milliseconds)
[info] - FileBasedWriteAheadLog - seqToParIterator (96 milliseconds)
[info] - FileBasedWriteAheadLogWriter - writing data (19 milliseconds)
[info] - FileBasedWriteAheadLogWriter - syncing of data by writing and reading immediately (17 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data (3 milliseconds)
[info] - FileBasedWriteAheadLogReader - sequentially reading data written with writer (2 milliseconds)
[info] - FileBasedWriteAheadLogReader - reading data written with writer after corrupted write (974 milliseconds)
[info] - FileBasedWriteAheadLogReader - handles errors when file doesn't exist (2 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader - reading data using random reader (8 milliseconds)
[info] - FileBasedWriteAheadLogRandomReader- reading data using random reader written with writer (5 milliseconds)
[info] FileBasedWriteAheadLogWithFileCloseAfterWriteSuite:
[info] - FileBasedWriteAheadLog - read all logs (29 milliseconds)
[info] - FileBasedWriteAheadLog - write logs (27 milliseconds)
[info] - FileBasedWriteAheadLog - read all logs after write (35 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs (33 milliseconds)
[info] - FileBasedWriteAheadLog - clean old logs synchronously (38 milliseconds)
[info] - FileBasedWriteAheadLog - handling file errors while reading rotating logs (111 milliseconds)
[info] - FileBasedWriteAheadLog - do not create directories or files unless write (1 millisecond)
[info] - FileBasedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (6 milliseconds)
[info] - FileBasedWriteAheadLog - close after write flag (3 milliseconds)
[info] BatchedWriteAheadLogSuite:
[info] - BatchedWriteAheadLog - read all logs (33 milliseconds)
[info] - BatchedWriteAheadLog - write logs (23 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (27 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (14 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (17 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (89 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (11 milliseconds)
[info] - BatchedWriteAheadLog - serializing and deserializing batched records (2 milliseconds)
[info] - BatchedWriteAheadLog - failures in wrappedLog get bubbled up (22 milliseconds)
[info] - BatchedWriteAheadLog - name log with the highest timestamp of aggregated entries (19 milliseconds)
[info] - BatchedWriteAheadLog - shutdown properly (1 millisecond)
[info] - BatchedWriteAheadLog - fail everything in queue during shutdown (5 milliseconds)
[info] BatchedWriteAheadLogWithCloseFileAfterWriteSuite:
[info] - BatchedWriteAheadLog - read all logs (72 milliseconds)
[info] - BatchedWriteAheadLog - write logs (33 milliseconds)
[info] - BatchedWriteAheadLog - read all logs after write (50 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs (38 milliseconds)
[info] - BatchedWriteAheadLog - clean old logs synchronously (35 milliseconds)
[info] - BatchedWriteAheadLog - handling file errors while reading rotating logs (130 milliseconds)
[info] - BatchedWriteAheadLog - do not create directories or files unless write (2 milliseconds)
[info] - BatchedWriteAheadLog - parallel recovery not enabled if closeFileAfterWrite = false (7 milliseconds)
[info] - BatchedWriteAheadLog - close after write flag (3 milliseconds)
[info] CheckpointSuite:
[info] - non-existent checkpoint dir (3 milliseconds)
[info] - Job should not complete if all commits are denied (5 seconds, 6 milliseconds)
[info] - Only authorized committer failures can clear the authorized committer lock (SPARK-6614) (7 milliseconds)
[info] - SPARK-19631: Do not allow failed attempts to be authorized for committing (4 milliseconds)
[info] - SPARK-24589: Differentiate tasks from different stage attempts (4 milliseconds)
[info] - SPARK-24589: Make sure stage state is cleaned up (853 milliseconds)
[info] TaskMetricsSuite:
[info] - mutating values (1 millisecond)
[info] - mutating shuffle read metrics values (0 milliseconds)
[info] - mutating shuffle write metrics values (1 millisecond)
[info] - mutating input metrics values (0 milliseconds)
[info] - mutating output metrics values (0 milliseconds)
[info] - merging multiple shuffle read metrics (1 millisecond)
[info] - additional accumulables (1 millisecond)
[info] OutputCommitCoordinatorIntegrationSuite:
[info] - exception thrown in OutputCommitter.commitTask() (98 milliseconds)
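Editor's note: the FileBasedWriteAheadLog and BatchedWriteAheadLog suites above write streaming records to log files and replay them for recovery. Spark's classes are internal, so the following is only a rough sketch of the idea (the object name, record format, and helpers are made up for illustration, not Spark's implementation):

    import java.io.{DataInputStream, DataOutputStream, EOFException, File, FileInputStream, FileOutputStream}

    object TinyWriteAheadLog {
      // Append a length-prefixed record and fsync so it survives a crash
      def write(log: File, record: Array[Byte]): Unit = {
        val out = new FileOutputStream(log, true)
        try {
          val data = new DataOutputStream(out)
          data.writeInt(record.length)
          data.write(record)
          data.flush()
          out.getFD.sync()
        } finally out.close()
      }

      // Replay every record in write order (the "read all logs after write" idea)
      def readAll(log: File): Seq[Array[Byte]] = {
        val in = new DataInputStream(new FileInputStream(log))
        val records = scala.collection.mutable.ArrayBuffer.empty[Array[Byte]]
        try {
          while (true) {
            val rec = new Array[Byte](in.readInt())
            in.readFully(rec)
            records += rec
          }
        } catch { case _: EOFException => () } finally in.close()
        records
      }
    }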
[info] UIUtilsSuite:
[info] - makeDescription(plainText = false) (22 milliseconds)
[info] - makeDescription(plainText = true) (7 milliseconds)
[info] - SPARK-11906: Progress bar should not overflow because of speculative tasks (2 milliseconds)
[info] - decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.) (0 milliseconds)
[info] - SPARK-20393: Prevent newline characters in parameters. (1 millisecond)
[info] - SPARK-20393: Prevent script from parameters running on page. (0 milliseconds)
[info] - SPARK-20393: Prevent javascript from parameters running on page. (1 millisecond)
[info] - SPARK-20393: Prevent links from parameters on page. (0 milliseconds)
[info] - SPARK-20393: Prevent popups from parameters on page. (0 milliseconds)
[info] SumEvaluatorSuite:
[info] - correct handling of count 1 (2 milliseconds)
[info] - correct handling of count 0 (0 milliseconds)
[info] - correct handling of NaN (1 millisecond)
[info] - correct handling of > 1 values (9 milliseconds)
[info] - test count > 1 (1 millisecond)
[info] ApplicationCacheSuite:
[info] - Completed UI get (34 milliseconds)
[info] - Test that if an attempt ID is set, it must be used in lookups (3 milliseconds)
[info] - Incomplete apps refreshed (9 milliseconds)
[info] - Large Scale Application Eviction (213 milliseconds)
[info] - Attempts are Evicted (13 milliseconds)
[info] - redirect includes query params (26 milliseconds)
[info] RpcAddressSuite:
[info] - hostPort (0 milliseconds)
[info] - fromSparkURL (0 milliseconds)
[info] - fromSparkURL: a typo url (0 milliseconds)
[info] - fromSparkURL: invalid scheme (1 millisecond)
[info] - toSparkURL (0 milliseconds)
[info] HistoryServerSuite:
[info] - application list json (1 second, 248 milliseconds)
[info] - completed app list json (22 milliseconds)
[info] - running app list json (9 milliseconds)
[info] - minDate app list json (10 milliseconds)
[info] - maxDate app list json (8 milliseconds)
[info] - maxDate2 app list json (11 milliseconds)
[info] - minEndDate app list json (9 milliseconds)
[info] - maxEndDate app list json (9 milliseconds)
[info] - minEndDate and maxEndDate app list json (7 milliseconds)
[info] - minDate and maxEndDate app list json (8 milliseconds)
[info] - limit app list json (9 milliseconds)
[info] - one app json (70 milliseconds)
[info] - one app multi-attempt json (6 milliseconds)
[info] - job list json (348 milliseconds)
[info] - job list from multi-attempt app json(1) (255 milliseconds)
[info] - job list from multi-attempt app json(2) (195 milliseconds)
[info] - one job json (7 milliseconds)
[info] - succeeded job list json (7 milliseconds)
[info] - succeeded&failed job list json (10 milliseconds)
[info] - executor list json (12 milliseconds)
[info] - stage list json (68 milliseconds)
[info] - complete stage list json (10 milliseconds)
[info] - failed stage list json (8 milliseconds)
[info] - one stage json (33 milliseconds)
[info] - one stage attempt json (39 milliseconds)
[info] - stage task summary w shuffle write (377 milliseconds)
[info] - stage task summary w shuffle read (23 milliseconds)
[info] - stage task summary w/ custom quantiles (29 milliseconds)
[info] - stage task list (16 milliseconds)
[info] - stage task list w/ offset & length (34 milliseconds)
[info] - stage task list w/ sortBy (21 milliseconds)
[info] - stage task list w/ sortBy short names: -runtime (19 milliseconds)
[info] - stage task list w/ sortBy short names: runtime (17 milliseconds)
[info] - stage list with accumulable json (23 milliseconds)
[info] - stage with accumulable json (28 milliseconds)
[info] - stage task list from multi-attempt app json(1) (12 milliseconds)
[info] - stage task list from multi-attempt app json(2) (19 milliseconds)
[info] - blacklisting for stage (226 milliseconds)
[info] - blacklisting node for stage (207 milliseconds)
[info] - rdd list storage json (16 milliseconds)
[info] - executor node blacklisting (163 milliseconds)
[info] - executor node blacklisting unblacklisting (166 milliseconds)
[info] - executor memory usage (8 milliseconds)
[info] - app environment (53 milliseconds)
[info] - download all logs for app with multiple attempts (123 milliseconds)
[info] - download one log for app with multiple attempts (129 milliseconds)
[info] - response codes on bad paths (26 milliseconds)
[info] - automatically retrieve uiRoot from request through Knox (38 milliseconds)
[info] - static relative links are prefixed with uiRoot (spark.ui.proxyBase) (8 milliseconds)
[info] - /version api endpoint (7 milliseconds)
[info] - basic rdd checkpoints + dstream graph checkpoint recovery (8 seconds, 919 milliseconds)
[info] - recovery of conf through checkpoints (215 milliseconds)
[info] - get correct spark.driver.[host|port] from checkpoint (178 milliseconds)
[info] - SPARK-30199 get ui port and blockmanager port (112 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
[info] - recovery with map and reduceByKey operations (465 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,4)
[info] - recovery with invertible reduceByKeyAndWindow operation (946 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)
[info] - run Spark in yarn-cluster mode unsuccessfully (16 seconds, 34 milliseconds)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
[info] - recovery with saveAsHadoopFiles operation (871 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,2)
(b,1)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,2)
(b,1)
[info] - ajax rendered relative links are prefixed with uiRoot (spark.ui.proxyBase) (4 seconds, 298 milliseconds)
[info] - security manager starts with spark.authenticate set (45 milliseconds)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
[info] - recovery with saveAsNewAPIHadoopFiles operation (840 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(b,1)
(a,2)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 1500 ms
-------------------------------------------
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(b,1)
(a,2)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(,2)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
[info] - recovery with saveAsHadoopFile inside transform operation (1 second, 33 milliseconds)
-------------------------------------------
Time: 500 ms
-------------------------------------------
(a,1)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
(a,2)
-------------------------------------------
Time: 1500 ms
-------------------------------------------
(a,3)
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,4)
-------------------------------------------
Time: 2500 ms
-------------------------------------------
(a,5)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,6)
-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)
-------------------------------------------
Time: 3500 ms
-------------------------------------------
(a,7)
-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,8)
-------------------------------------------
Time: 4500 ms
-------------------------------------------
(a,9)
-------------------------------------------
Time: 5000 ms
-------------------------------------------
(a,10)
[info] - recovery with updateStateByKey operation (808 milliseconds)
[info] - incomplete apps get refreshed (4 seconds, 334 milliseconds)
[info] - recovery maintains rate controller (2 seconds, 657 milliseconds)
[info] - ui and api authorization checks (939 milliseconds)
[info] - access history application defaults to the last attempt id (253 milliseconds)
[info] JVMObjectTrackerSuite:
[info] - JVMObjectId does not take null IDs (2 milliseconds)
[info] - JVMObjectTracker (2 milliseconds)
[info] BlockManagerSuite:
[info] - StorageLevel object caching (0 milliseconds)
[info] - BlockManagerId object caching (1 millisecond)
[info] - compacted topic (2 minutes, 5 seconds)
[info] - BlockManagerId.isDriver() backwards-compatibility with legacy driver ids (SPARK-6716) (0 milliseconds)
[info] - master + 1 manager interaction (40 milliseconds)
[info] - master + 2 managers interaction (113 milliseconds)
[info] - iterator boundary conditions (280 milliseconds)
[info] - executor sorting (11 milliseconds)
[info] - removing block (115 milliseconds)
[info] - removing rdd (43 milliseconds)
[info] - removing broadcast (248 milliseconds)
[info] - reregistration on heart beat (34 milliseconds)
[info] - reregistration on block update (36 milliseconds)
Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
  at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:750)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
  at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:100)
  at org.apache.spark.streaming.TestOutputStream$$anonfun$$lessinit$greater$1.apply(TestSuiteBase.scala:99)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
  at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
  at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
  at scala.util.Try$.apply(Try.scala:192)
  at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
  at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
  at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
  at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
  at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  ... 2 more
[info] - reregistration doesn't dead lock (409 milliseconds)
[info] - correct BlockResult returned from get() calls (41 milliseconds)
[info] - optimize a location order of blocks without topology information (29 milliseconds)
[info] - optimize a location order of blocks with topology information (30 milliseconds)
[info] - SPARK-9591: getRemoteBytes from another location when Exception throw (138 milliseconds)
[info] - SPARK-14252: getOrElseUpdate should still read from remote storage (76 milliseconds)
[info] - in-memory LRU storage (31 milliseconds)
[info] - recovery with file input stream (3 seconds, 11 milliseconds)
[info] - in-memory LRU storage with serialization (73 milliseconds)
[info] - in-memory LRU storage with off-heap (124 milliseconds)
[info] - in-memory LRU for partitions of same RDD (28 milliseconds)
[info] - DStreamCheckpointData.restore invoking times (342 milliseconds)
[info] - in-memory LRU for partitions of multiple RDDs (26 milliseconds)
[info] DirectKafkaStreamSuite:
[info] - on-disk storage (encryption = off) (201 milliseconds)
[info] - on-disk storage (encryption = on) (82 milliseconds)
[info] - disk and memory storage (encryption = off) (72 milliseconds)
[info] - disk and memory storage (encryption = on) (52 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = off) (49 milliseconds)
[info] - recovery from checkpoint contains array object (798 milliseconds)
[info] - disk and memory storage with getLocalBytes (encryption = on) (57 milliseconds)
[info] - SPARK-11267: the race condition of two checkpoints in a batch (52 milliseconds)
[info] - SPARK-28912: Fix MatchError in getCheckpointFiles (23 milliseconds)
[info] - disk and memory storage with serialization (encryption = off) (81 milliseconds)
[info] - disk and memory storage with serialization (encryption = on) (79 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = off) (59 milliseconds)
[info] - disk and memory storage with serialization and getLocalBytes (encryption = on) (70 milliseconds)
[info] - SPARK-6847: stack overflow when updateStateByKey is followed by a checkpointed dstream (442 milliseconds)
[info] MapWithStateSuite:
[info] - state - get, exists, update, remove, (3 milliseconds)
[info] - disk and off-heap memory storage (encryption = off) (91 milliseconds)
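Editor's note: the "Exception in thread streaming-job-executor-0" above comes from a job thread blocked in ThreadUtils.awaitReady on a collect() result when the test stops the streaming context: the interrupt lands in the waiting Promise and surfaces as InterruptedException. The underlying mechanics can be reproduced with a plain Promise (a standalone sketch, not Spark code):

    import scala.concurrent.{Await, Promise}
    import scala.concurrent.duration.Duration

    object InterruptWhileAwaiting {
      def main(args: Array[String]): Unit = {
        val never = Promise[Unit]().future // a result that is never produced
        val waiter = new Thread(new Runnable {
          def run(): Unit =
            try Await.ready(never, Duration.Inf)
            catch { case _: InterruptedException => println("interrupted while awaiting") }
        })
        waiter.start()
        Thread.sleep(100)  // let the thread block on the promise
        waiter.interrupt() // roughly what stopping the context does to job threads
        waiter.join()
      }
    }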
[info] - disk and off-heap memory storage (encryption = on) (109 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = off) (72 milliseconds)
[info] - disk and off-heap memory storage with getLocalBytes (encryption = on) (55 milliseconds)
[info] - mapWithState - basic operations with simple API (420 milliseconds)
[info] - LRU with mixed storage levels (encryption = off) (124 milliseconds)
[info] - basic stream receiving with multiple topics and smallest starting offset (1 second, 690 milliseconds)
[info] - LRU with mixed storage levels (encryption = on) (84 milliseconds)
[info] - in-memory LRU with streams (encryption = off) (43 milliseconds)
[info] - mapWithState - basic operations with advanced API (369 milliseconds)
[info] - mapWithState - type inferencing and class tags (8 milliseconds)
[info] - in-memory LRU with streams (encryption = on) (36 milliseconds)
[info] - LRU with mixed storage levels and streams (encryption = off) (171 milliseconds)
[info] - mapWithState - states as mapped data (391 milliseconds)
[info] - LRU with mixed storage levels and streams (encryption = on) (212 milliseconds)
[info] - negative byte values in ByteBufferInputStream (1 millisecond)
[info] - overly large block (52 milliseconds)
[info] - mapWithState - initial states, with nothing returned as from mapping function (357 milliseconds)
[info] - block compression (409 milliseconds)
[info] - mapWithState - state removing (426 milliseconds)
[info] - block store put failure (13 milliseconds)
[info] - turn off updated block statuses (32 milliseconds)
[info] - updated block statuses (51 milliseconds)
[info] - query block statuses (52 milliseconds)
[info] - get matching blocks (40 milliseconds)
[info] - SPARK-1194 regression: fix the same-RDD rule for cache replacement (32 milliseconds)
[info] - safely unroll blocks through putIterator (disk) (44 milliseconds)
[info] - read-locked blocks cannot be evicted from memory (124 milliseconds)
[info] - remove block if a read fails due to missing DiskStore files (SPARK-15736) (206 milliseconds)
[info] - mapWithState - state timing out (1 second, 79 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should fail after hitting a threshold) (38 milliseconds)
[info] - mapWithState - checkpoint durations (59 milliseconds)
[info] - SPARK-13328: refresh block locations (fetch should succeed after location refresh) (31 milliseconds)
-------------------------------------------
Time: 1000 ms
-------------------------------------------
-------------------------------------------
Time: 2000 ms
-------------------------------------------
(a,1)
[info] - SPARK-17484: block status is properly updated following an exception in put() (100 milliseconds)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)
[info] - pattern based subscription (2 seconds, 756 milliseconds)
[info] - SPARK-17484: master block locations are updated following an invalid remote block fetch (90 milliseconds)
-------------------------------------------
Time: 3000 ms
-------------------------------------------
(a,2)
(b,1)
-------------------------------------------
Time: 4000 ms
-------------------------------------------
(a,3)
(b,2)
(c,1)
-------------------------------------------
Time: 5000 ms
-------------------------------------------
(c,1)
(a,4)
(b,3)
-------------------------------------------
Time: 6000 ms
-------------------------------------------
(b,3)
(c,1)
(a,5)
[info] - receiving from largest starting offset (316 milliseconds)
-------------------------------------------
Time: 7000 ms
-------------------------------------------
(a,5)
(b,3)
(c,1)
[info] - mapWithState - driver failure recovery (618 milliseconds)
[info] BlockGeneratorSuite:
[info] - block generation and data callbacks (34 milliseconds)
[info] - stop ensures correct shutdown (233 milliseconds)
[info] - creating stream by offset (357 milliseconds)
[info] - block push errors are reported (32 milliseconds)
[info] StreamingJobProgressListenerSuite:
[info] - onBatchSubmitted, onBatchStarted, onBatchCompleted, onReceiverStarted, onReceiverError, onReceiverStopped (87 milliseconds)
[info] - Remove the old completed batches when exceeding the limit (80 milliseconds)
[info] - out-of-order onJobStart and onBatchXXX (149 milliseconds)
[info] - detect memory leak (120 milliseconds)
[info] ExecutorAllocationManagerSuite:
[info] - basic functionality (64 milliseconds)
[info] - requestExecutors policy (15 milliseconds)
[info] - killExecutor policy (6 milliseconds)
[info] - parameter validation (18 milliseconds)
[info] - enabling and disabling (406 milliseconds)
[info] RateLimiterSuite:
[info] - rate limiter initializes even without a maxRate set (1 millisecond)
[info] - rate limiter updates when below maxRate (1 millisecond)
[info] - rate limiter stays below maxRate despite large updates (0 milliseconds)
[info] StreamingContextSuite:
[info] - from no conf constructor (74 milliseconds)
[info] - from no conf + spark home (76 milliseconds)
[info] - from no conf + spark home + env (64 milliseconds)
[info] - from conf with settings (152 milliseconds)
[info] - from existing SparkContext (80 milliseconds)
[info] - from existing SparkContext with settings (108 milliseconds)
[info] - from checkpoint (205 milliseconds)
[info] - checkPoint from conf (94 milliseconds)
[info] - state matching (0 milliseconds)
[info] - start and stop state check (80 milliseconds)
[info] - start with non-serializable DStream checkpoints (134 milliseconds)
[info] - start failure should stop internal components (63 milliseconds)
[info] - start should set local properties of streaming jobs correctly (276 milliseconds)
[info] - start multiple times (99 milliseconds)
[info] - stop multiple times (155 milliseconds)
[info] - offset recovery (3 seconds, 55 milliseconds)
[info] - stop before start (102 milliseconds)
[info] - start after stop (59 milliseconds)
[info] - stop only streaming context (157 milliseconds)
[info] - stop(stopSparkContext=true) after stop(stopSparkContext=false) (70 milliseconds)
[info] - offset recovery from kafka (637 milliseconds)
[info] - Direct Kafka stream report input information (681 milliseconds)
[info] - maxMessagesPerPartition with backpressure disabled (102 milliseconds)
[info] - maxMessagesPerPartition with no lag (81 milliseconds)
[info] - SPARK-20640: Shuffle registration timeout and maxAttempts conf are working (5 seconds, 246 milliseconds)
[info] - maxMessagesPerPartition respects max rate (98 milliseconds)
[info] - fetch remote block to local disk if block size is larger than threshold (34 milliseconds)
[info] - query locations of blockIds (6 milliseconds)
[info] CompactBufferSuite:
[info] - empty buffer (1 millisecond)
[info] - basic inserts (6 milliseconds)
[info] - adding sequences (2 milliseconds)
[info] - adding the same buffer to itself (2 milliseconds)
[info] MasterSuite:
[info] - can use a custom recovery mode factory (84 milliseconds)
[info] - master correctly recover the application (93 milliseconds)
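Editor's note: the mapWithState and updateStateByKey batch outputs interleaved through this run ((a,1), (a,2), ..., (a,10)) show per-key running state: each batch folds its new values into the state left by the previous batch. The bookkeeping can be shown without a cluster (plain Scala simulation for illustration only, not Spark's implementation):

    object RunningStateDemo {
      def main(args: Array[String]): Unit = {
        // One "a" arrives per batch, as in the updateStateByKey recovery output
        val batches = Seq(Seq("a"), Seq("a"), Seq("a"))
        var state = Map.empty[String, Int]
        for (batch <- batches) {
          val counts = batch.groupBy(identity).mapValues(_.size)
          // fold this batch's counts into the running per-key state
          state = counts.foldLeft(state) {
            case (s, (k, n)) => s + (k -> (s.getOrElse(k, 0) + n))
          }
          println(state) // Map(a -> 1), Map(a -> 2), Map(a -> 3)
        }
      }
    }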
[info] - master/worker web ui available (271 milliseconds)
[info] - using rate controller (2 seconds, 44 milliseconds)
[info] - backpressure.initialRate should honor maxRatePerPartition (382 milliseconds)
[info] - use backpressure.initialRate with backpressure (345 milliseconds)
[info] - maxMessagesPerPartition with zero offset and rate equal to the specified minimum with default 1 (59 milliseconds)
[info] - stop gracefully (6 seconds, 175 milliseconds)
[info] KafkaDataConsumerSuite:
[info] - KafkaDataConsumer reuse in case of same groupId and TopicPartition (4 milliseconds)
[info] - stop gracefully even if a receiver misses StopReceiver (787 milliseconds)
[info] - concurrent use of KafkaDataConsumer (1 second, 251 milliseconds)
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaLocationStrategySuite.testLocationStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.007s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaKafkaRDDSuite.testKafkaRDD started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 2.07s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaConsumerStrategySuite.testConsumerStrategyConstructors started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.streaming.kafka010.JavaDirectKafkaStreamSuite.testKafkaStream started
[info] - run Spark in yarn-cluster mode failure after sc initialized (30 seconds, 33 milliseconds)
[info] Test run finished: 0 failed, 0 ignored, 1 total, 2.074s
[info] SparkAWSCredentialsBuilderSuite:
[info] - should build DefaultCredentials when given no params (21 milliseconds)
[info] - should build BasicCredentials (2 milliseconds)
[info] - should build STSCredentials (1 millisecond)
[info] - SparkAWSCredentials classes should be serializable (4 milliseconds)
[info] KinesisCheckpointerSuite:
[info] - checkpoint is not called twice for the same sequence number (38 milliseconds)
[info] - checkpoint is called after sequence number increases (2 milliseconds)
[info] - should checkpoint if we have exceeded the checkpoint interval (13 milliseconds)
[info] - shouldn't checkpoint if we have not exceeded the checkpoint interval (1 millisecond)
[info] - should not checkpoint for the same sequence number (2 milliseconds)
[info] - removing checkpointer checkpoints one last time (1 millisecond)
[info] - if checkpointing is going on, wait until finished before removing and checkpointing (91 milliseconds)
[info] KinesisInputDStreamBuilderSuite:
[info] - should raise an exception if the StreamingContext is missing (4 milliseconds)
[info] - should raise an exception if the stream name is missing (4 milliseconds)
[info] - should raise an exception if the checkpoint app name is missing (1 millisecond)
[info] - should propagate required values to KinesisInputDStream (306 milliseconds)
[info] - should propagate default values to KinesisInputDStream (5 milliseconds)
[info] - should propagate custom non-auth values to KinesisInputDStream (16 milliseconds)
[info] - old Api should throw UnsupportedOperationExceptionexception with AT_TIMESTAMP (2 milliseconds)
[info] KinesisReceiverSuite:
[info] - process records including store and set checkpointer (5 milliseconds)
[info] - split into multiple processes if a limitation is set (2 milliseconds)
[info] - shouldn't store and update checkpointer when receiver is stopped (2 milliseconds)
[info] - shouldn't update checkpointer when exception occurs during store (6 milliseconds)
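Editor's note: the maxMessagesPerPartition and backpressure.initialRate tests above bound how much one direct Kafka batch may read per partition: roughly, the backpressure rate estimate is capped by spark.streaming.kafka.maxRatePerPartition and multiplied by the batch interval. A back-of-the-envelope sketch of that cap (simplified and assumed; the real DirectKafkaInputDStream also weights the rate by per-partition lag):

    object RateCapDemo {
      // Hypothetical helper: messages allowed in one batch for one partition
      def maxMessages(backpressureRate: Double, maxRatePerPartition: Double, batchSeconds: Double): Long = {
        val effectiveRate =
          if (maxRatePerPartition > 0) math.min(backpressureRate, maxRatePerPartition)
          else backpressureRate // a non-positive cap means "no cap"
        (effectiveRate * batchSeconds).toLong
      }

      def main(args: Array[String]): Unit = {
        println(maxMessages(backpressureRate = 1000.0, maxRatePerPartition = 400.0, batchSeconds = 2.0)) // 800
        println(maxMessages(backpressureRate = 1000.0, maxRatePerPartition = 0.0, batchSeconds = 2.0))   // 2000
      }
    }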
[info] - shutdown should checkpoint if the reason is TERMINATE (6 milliseconds)
[info] - shutdown should not checkpoint if the reason is something other than TERMINATE (1 millisecond)
[info] - retry success on first attempt (1 millisecond)
[info] - retry success on second attempt after a Kinesis throttling exception (12 milliseconds)
[info] - retry success on second attempt after a Kinesis dependency exception (61 milliseconds)
[info] - retry failed after a shutdown exception (3 milliseconds)
[info] - retry failed after an invalid state exception (3 milliseconds)
[info] - retry failed after unexpected exception (4 milliseconds)
[info] - retry failed after exhausting all retries (66 milliseconds)
[info] WithAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisBackedBlockRDDSuite:
[info] - Basic reading from Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available in both block manager and Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in block manager, not in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available only in Kinesis, not in block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Read data available partially in block manager, rest in Kinesis [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test isBlockValid skips block fetching from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Test whether RDD is valid after removing blocks from block manager [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithAggregationKinesisStreamSuite:
[info] - KinesisUtils API (21 milliseconds)
[info] - RDD generation (39 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (4 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] WithoutAggregationKinesisStreamSuite:
[info] - KinesisUtils API (4 milliseconds)
[info] - RDD generation (6 milliseconds)
[info] - basic operation [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
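Editor's note: the KinesisReceiverSuite retry results above ("retry success on second attempt", "retry failed after exhausting all retries") describe the usual bounded retry-with-backoff pattern around throttled Kinesis calls. A generic sketch of that pattern (names and backoff schedule are illustrative, not Spark's implementation):

    import scala.annotation.tailrec
    import scala.util.{Failure, Success, Try}

    object RetryDemo {
      @tailrec
      def retry[T](attemptsLeft: Int, backoffMs: Long)(op: => T): T =
        Try(op) match {
          case Success(v) => v
          case Failure(_) if attemptsLeft > 1 =>
            Thread.sleep(backoffMs)                     // wait out the throttle
            retry(attemptsLeft - 1, backoffMs * 2)(op)  // try again, backing off
          case Failure(e) => throw e                    // exhausted all retries
        }

      def main(args: Array[String]): Unit = {
        var calls = 0
        val result = retry(attemptsLeft = 3, backoffMs = 10) {
          calls += 1
          if (calls < 2) throw new RuntimeException("throttled") else "ok"
        }
        println(result + " after " + calls + " attempts") // ok after 2 attempts
      }
    }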
[info] - custom message handling [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Kinesis read with custom configurations (4 milliseconds)
[info] - split and merge shards in a stream [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - failure recovery [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] - Prepare KinesisTestUtils [enable by setting env var ENABLE_KINESIS_TESTS=1] !!! IGNORED !!!
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsStsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandlerAwsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testCustomHandler started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testAwsCreds started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisStreamSuite.testKinesisStream started
[info] Test run finished: 0 failed, 0 ignored, 5 total, 0.822s
[info] Test run started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilderOldApi started
[info] Test org.apache.spark.streaming.kinesis.JavaKinesisInputDStreamBuilderSuite.testJavaKinesisDStreamBuilder started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.281s
[info] ScalaTest
[info] Run completed in 6 minutes, 14 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 104, Failed 0, Errors 0, Passed 103, Skipped 1
[info] ScalaTest
[info] Run completed in 6 minutes, 14 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 44, Failed 0, Errors 0, Passed 44
[info] ScalaTest
[info] Run completed in 6 minutes, 13 seconds.
[info] Total number of tests run: 85
[info] Suites: completed 8, aborted 0
[info] Tests: succeeded 85, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 85, Failed 0, Errors 0, Passed 85
[info] ScalaTest
[info] Run completed in 6 minutes, 14 seconds.
[info] Total number of tests run: 5
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 5, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.
[info] Passed: Total 5, Failed 0, Errors 0, Passed 5
[info] ScalaTest
[info] Run completed in 6 minutes, 10 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] ExecutorClassLoaderSuite:
[info] - stop slow receiver gracefully (15 seconds, 889 milliseconds)
[info] - registering and de-registering of streamingSource (62 milliseconds)
[info] - SPARK-28709 registering and de-registering of progressListener (87 milliseconds)
[info] - child over system classloader (1 second, 21 milliseconds)
[info] - child first (48 milliseconds)
[info] - parent first (52 milliseconds)
[info] - child first can fall back (58 milliseconds)
[info] - child first can fail (73 milliseconds)
[info] - resource from parent (64 milliseconds)
[info] - resources from parent (54 milliseconds)
[info] - fetch classes using Spark's RpcEnv (298 milliseconds)
[info] ReplSuite:
[info] - awaitTermination (2 seconds, 70 milliseconds)
[info] - awaitTermination after stop (70 milliseconds)
[info] - awaitTermination with error in task (105 milliseconds)
[info] - awaitTermination with error in job generation (477 milliseconds)
[info] - awaitTerminationOrTimeout (1 second, 71 milliseconds)
[info] - getOrCreate (628 milliseconds)
[info] - getActive and getActiveOrCreate (135 milliseconds)
[info] - getActiveOrCreate with checkpoint (481 milliseconds)
[info] - multiple streaming contexts (56 milliseconds)
[info] - DStream and generated RDD creation sites (520 milliseconds)
[info] - throw exception on using active or stopped context (98 milliseconds)
[info] - propagation of local properties (5 seconds, 28 milliseconds)
[info] - queueStream doesn't support checkpointing (538 milliseconds)
[info] - Creating an InputDStream but not using it should not crash (964 milliseconds)
Spark context available as 'sc' (master = local, app id = local-1610933175413).
Spark session available as 'spark'.
[info] - run Python application in yarn-client mode (20 seconds, 31 milliseconds)
[info] - master/worker web ui available with reverseProxy (30 seconds, 265 milliseconds)
[info] - basic scheduling - spread out (61 milliseconds)
[info] - basic scheduling - no spread out (61 milliseconds)
[info] - basic scheduling with more memory - spread out (54 milliseconds)
[info] - basic scheduling with more memory - no spread out (59 milliseconds)
[info] - scheduling with max cores - spread out (67 milliseconds)
[info] - scheduling with max cores - no spread out (63 milliseconds)
[info] - scheduling with cores per executor - spread out (65 milliseconds)
[info] - scheduling with cores per executor - no spread out (58 milliseconds)
[info] - scheduling with cores per executor AND max cores - spread out (95 milliseconds)
[info] - scheduling with cores per executor AND max cores - no spread out (113 milliseconds)
[info] - scheduling with executor limit - spread out (61 milliseconds)
[info] - scheduling with executor limit - no spread out (72 milliseconds)
[info] - scheduling with executor limit AND max cores - spread out (58 milliseconds)
[info] - scheduling with executor limit AND max cores - no spread out (52 milliseconds)
[info] - scheduling with executor limit AND cores per executor - spread out (187 milliseconds)
[info] - scheduling with executor limit AND cores per executor - no spread out (57 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - spread out (86 milliseconds)
[info] - scheduling with executor limit AND cores per executor AND max cores - no spread out (56 milliseconds)
[info] - SPARK-13604: Master should ask Worker kill unknown executors and drivers (216 milliseconds)
[info] - SPARK-20529: Master should reply the address received from worker (90 milliseconds)
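Editor's note: the ExecutorClassLoaderSuite results above cover both delegation orders: parent-first is the JVM default, while a child-first loader consults its own URLs before the parent and falls back when it misses ("child first can fall back"). A bare-bones child-first loader for illustration only (heavily simplified; Spark's ExecutorClassLoader can also fetch REPL classes over RpcEnv, as one test shows):

    import java.net.{URL, URLClassLoader}

    // Passing null as the super parent means super.loadClass only sees the
    // bootstrap classes, so our URLs are searched first for everything else.
    class ChildFirstClassLoader(urls: Array[URL], fallback: ClassLoader)
        extends URLClassLoader(urls, null) {
      override def loadClass(name: String, resolve: Boolean): Class[_] =
        try super.loadClass(name, resolve)   // look in our own URLs first
        catch {
          case _: ClassNotFoundException =>
            fallback.loadClass(name)         // "child first can fall back"
        }
    }

A loader like this would be constructed with the REPL output directory as a URL and the application classloader as the fallback.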
SPARK-15236: use Hive catalog (6 seconds, 495 milliseconds) [info] - SPARK-19900: there should be a corresponding driver for the app after relaunching driver (2 seconds, 147 milliseconds) [info] CompletionIteratorSuite: [info] - basic test (1 millisecond) [info] - reference to sub iterator should not be available after completion (829 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933180712). Spark session available as 'spark'. [info] SparkListenerSuite: [info] - don't call sc.stop in listener (61 milliseconds) [info] - basic creation and shutdown of LiveListenerBus (5 milliseconds) [info] - bus.stop() waits for the event queue to completely drain (3 milliseconds) [info] - metrics for dropped listener events (3 milliseconds) [info] - basic creation of StageInfo (73 milliseconds) [info] - basic creation of StageInfo with shuffle (168 milliseconds) [info] - StageInfo with fewer tasks than partitions (60 milliseconds) [info] - SPARK-15236: use in-memory catalog (2 seconds, 861 milliseconds) [info] - local metrics (1 second, 296 milliseconds) [info] - onTaskGettingResult() called when result fetched remotely (402 milliseconds) [info] - onTaskGettingResult() not called when result sent directly (104 milliseconds) [info] - onTaskEnd() should be called for all started tasks, even after job has been killed (136 milliseconds) [info] - SparkListener moves on if a listener throws an exception (21 milliseconds) [info] - registering listeners via spark.extraListeners (197 milliseconds) Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206) at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222) at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157) at org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:243) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:750) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:385) at org.apache.spark.rdd.RDD.collect(RDD.scala:989) at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:850) at org.apache.spark.streaming.StreamingContextSuite$$anonfun$57$$anonfun$apply$21.apply(StreamingContextSuite.scala:848) at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628) at 
org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50) at scala.util.Try$.apply(Try.scala:192) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ... 2 more [info] - add and remove listeners to/from LiveListenerBus queues (6 milliseconds) [info] - interrupt within listener is handled correctly: throw interrupt (42 milliseconds) [info] - interrupt within listener is handled correctly: set Thread interrupted (40 milliseconds) [info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: throw interrupt (30 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933183522). Spark session available as 'spark'. [info] - SPARK-30285: Fix deadlock in AsyncEventQueue.removeListenerOnError: set Thread interrupted (25 milliseconds) [info] SortShuffleSuite: [info] - SPARK-18560 Receiver data should be deserialized properly. (9 seconds, 685 milliseconds) [info] - groupByKey without compression (249 milliseconds) [info] - SPARK-22955 graceful shutdown shouldn't lead to job generation error (478 milliseconds) [info] RateControllerSuite: [info] - RateController - rate controller publishes updates after batches complete (439 milliseconds) [info] - ReceiverRateController - published rates reach receivers (582 milliseconds) [info] FailureSuite: [info] - broadcast vars (4 seconds, 451 milliseconds) [info] - shuffle non-zero block size (3 seconds, 444 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933187559). Spark session available as 'spark'. [info] - line wrapper only initialized once when used as encoder outer scope (3 seconds, 157 milliseconds) [info] - shuffle serializer (3 seconds, 581 milliseconds) Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20210117172632-0000). Spark session available as 'spark'. // Exiting paste mode, now interpreting. [info] - define case class and create Dataset together with paste mode (5 seconds, 654 milliseconds) [info] - zero sized blocks (5 seconds, 530 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933196523). Spark session available as 'spark'. 
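The "streaming-job-executor-0" InterruptedException above is expected noise from a StreamingContextSuite case: a job blocks in rdd.collect() inside foreachRDD (StreamingContextSuite.scala:848-850 in the trace) and is interrupted while the context shuts down. A minimal Scala sketch of that pattern, not the suite's actual code; the object name, data, and timings are our own:

    import scala.collection.mutable
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object GracefulStopSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("graceful-stop-sketch")
        val ssc = new StreamingContext(conf, Seconds(1))
        // A queued RDD drives a few batches without an external receiver.
        val queue = mutable.Queue(ssc.sparkContext.makeRDD(1 to 1000, 4))
        ssc.queueStream(queue).foreachRDD { rdd =>
          // collect() parks the streaming-job-executor thread in
          // DAGScheduler.runJob; an interrupt here surfaces exactly as the
          // java.lang.Error: java.lang.InterruptedException in the log above.
          rdd.collect()
        }
        ssc.start()
        Thread.sleep(2000) // arbitrary; let a couple of batches run
        // Graceful stop drains pending batches before tearing the context down.
        ssc.stop(stopSparkContext = true, stopGracefully = true)
      }
    }
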
[info] - :replay should work correctly (3 seconds, 163 milliseconds) [info] - run Python application in yarn-cluster mode (23 seconds, 32 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933199515). Spark session available as 'spark'. [info] - spark-shell should find imported types in class constructors and extends clause (2 seconds, 62 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933201674). Spark session available as 'spark'. [info] - zero sized blocks without kryo (6 seconds, 15 milliseconds) Spark context available as 'sc' (master = local, app id = local-1610933203764). Spark session available as 'spark'. [info] - spark-shell should shadow val/def definitions correctly (4 seconds, 445 milliseconds) Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20210117172646-0000). Spark session available as 'spark'. [info] - shuffle on mutable pairs (3 seconds, 670 milliseconds) // Exiting paste mode, now interpreting. [info] - SPARK-26633: ExecutorClassLoader.getResourceAsStream find REPL classes (4 seconds, 732 milliseconds) [info] SingletonReplSuite: [info] - sorting on mutable pairs (3 seconds, 880 milliseconds) Spark context available as 'sc' (master = local-cluster[2,1,1024], app id = app-20210117172650-0000). Spark session available as 'spark'. [info] - cogroup using mutable pairs (3 seconds, 709 milliseconds) [info] - simple foreach with accumulator (2 seconds, 508 milliseconds) [info] - external vars (1 second, 508 milliseconds) [info] - external classes (454 milliseconds) [info] - external functions (504 milliseconds) [info] - subtract mutable pairs (3 seconds, 629 milliseconds) [info] - external functions that access vars (2 seconds, 931 milliseconds) [info] - broadcast vars (1 second, 4 milliseconds) [info] - sort with Java non serializable class - Kryo (4 seconds, 10 milliseconds) [info] - run Python application in yarn-cluster mode using spark.yarn.appMasterEnv to override local envvar (23 seconds, 27 milliseconds) [info] - interacting with files (1 second, 528 milliseconds) [info] - local-cluster mode (2 seconds, 5 milliseconds) [info] - sort with Java non serializable class - Java (3 seconds, 196 milliseconds) [info] - shuffle with different compression settings (SPARK-3426) (478 milliseconds) [info] - SPARK-1199 two instances of same class don't type check. (1 second, 8 milliseconds) [info] - [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file (374 milliseconds) [info] - SPARK-2452 compound statements. 
(302 milliseconds) [info] - cannot find its local shuffle file if no execution of the stage and rerun shuffle (110 milliseconds) [info] - metrics for shuffle without aggregation (295 milliseconds) [info] - metrics for shuffle with aggregation (622 milliseconds) [info] - multiple simultaneous attempts for one task (SPARK-8029) (95 milliseconds) [info] - SortShuffleManager properly cleans up files for shuffles that use the serialized path (226 milliseconds) [info] - SortShuffleManager properly cleans up files for shuffles that use the deserialized path (89 milliseconds) [info] TaskSchedulerImplSuite: [info] - Scheduler does not always schedule tasks on the same workers (849 milliseconds) [info] - Scheduler correctly accounts for multiple CPUs per task (45 milliseconds) [info] - Scheduler does not crash when tasks are not serializable (41 milliseconds) [info] - SPARK-2576 importing implicits (2 seconds, 505 milliseconds) [info] - concurrent attempts for the same stage only have one active taskset (56 milliseconds) [info] - don't schedule more tasks after a taskset is zombie (48 milliseconds) [info] - if a zombie attempt finishes, continue scheduling tasks for non-zombie attempts (55 milliseconds) [info] - tasks are not re-scheduled while executor loss reason is pending (48 milliseconds) [info] - scheduled tasks obey task and stage blacklists (156 milliseconds) [info] - scheduled tasks obey node and executor blacklists (99 milliseconds) [info] - abort stage when all executors are blacklisted and we cannot acquire new executor (57 milliseconds) [info] - SPARK-22148 abort timer should kick in when task is completely blacklisted & no new executor can be acquired (68 milliseconds) [info] - SPARK-22148 try to acquire a new executor when task is unschedulable with 1 executor (61 milliseconds) [info] - multiple failures with map (44 seconds, 68 milliseconds) [info] - SPARK-22148 abort timer should clear unschedulableTaskSetToExpiryTime for all TaskSets (74 milliseconds) [info] - SPARK-22148 Ensure we don't abort the taskSet if we haven't been completely blacklisted (58 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 0 (97 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 1 (99 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 2 (88 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 3 (84 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 4 (81 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 5 (87 milliseconds) [info] - Datasets and encoders (1 second, 504 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 6 (83 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 7 (82 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 8 (83 milliseconds) [info] - Blacklisted node for entire task set prevents per-task blacklist checks: iteration 9 (94 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 0 (134 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 1 (92 milliseconds) [info] - Blacklisted executor 
for entire task set prevents per-task blacklist checks: iteration 2 (106 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 3 (98 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 4 (99 milliseconds) [info] - SPARK-2632 importing a method from non serializable class and not using it. (1 second, 4 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 5 (96 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 6 (101 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 7 (102 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 8 (97 milliseconds) [info] - collecting objects of class defined in repl (503 milliseconds) [info] - Blacklisted executor for entire task set prevents per-task blacklist checks: iteration 9 (139 milliseconds) [info] - abort stage if executor loss results in unschedulability from previously failed tasks (51 milliseconds) [info] - don't abort if there is an executor available, though it hasn't had scheduled tasks yet (48 milliseconds) [info] - SPARK-16106 locality levels updated if executor added to existing host (52 milliseconds) [info] - scheduler checks for executors that can be expired from blacklist (54 milliseconds) [info] - if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553) (49 milliseconds) [info] - if a task finishes with TaskState.LOST its executor is marked as dead (62 milliseconds) [info] - Locality should be used for bulk offers even with delay scheduling off (51 milliseconds) [info] - With delay scheduling off, tasks can be run at any locality level immediately (71 milliseconds) [info] - TaskScheduler should throw IllegalArgumentException when schedulingMode is not supported (60 milliseconds) [info] - Completions in zombie tasksets update status of non-zombie taskset (112 milliseconds) [info] - don't schedule for a barrier taskSet if available slots are less than pending tasks (47 milliseconds) [info] - schedule tasks for a barrier taskSet if all tasks can be launched together (59 milliseconds) [info] - SPARK-29263: barrier TaskSet can't schedule when higher prio taskset takes the slots (48 milliseconds) [info] - collecting objects of class defined in repl - shuffling (1 second, 6 milliseconds) [info] - cancelTasks shall kill all the running tasks and fail the stage (57 milliseconds) [info] - killAllTaskAttempts shall kill all the running tasks and not fail the stage (77 milliseconds) [info] - mark taskset for a barrier stage as zombie in case a task fails (47 milliseconds) [info] ChunkedByteBufferFileRegionSuite: [info] - transferTo can stop and resume correctly (2 milliseconds) [info] - transfer to with random limits (272 milliseconds) [info] CryptoStreamUtilsSuite: [info] - crypto configuration conversion (0 milliseconds) [info] - shuffle encryption key length should be 128 by default (1 millisecond) [info] - create 256-bit key (1 millisecond) [info] - create key with invalid length (0 milliseconds) [info] - serializer manager integration (4 milliseconds) [info] - replicating blocks of object with class defined in repl (1 second, 505 milliseconds) [info] - encryption key propagation to executors (2 seconds, 907 milliseconds) [info] - crypto stream wrappers (5 milliseconds) 
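The TaskSchedulerImplSuite blacklist results above ("scheduled tasks obey task and stage blacklists", the SPARK-22148 abort-timer cases, blacklist expiry) are all governed by the spark.blacklist.* settings. The keys below are the real Spark 2.4 configuration; the values are example choices of ours:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.blacklist.enabled", "true")
      // Per-task limits behind "scheduled tasks obey task and stage blacklists":
      .set("spark.blacklist.task.maxTaskAttemptsPerExecutor", "1")
      .set("spark.blacklist.task.maxTaskAttemptsPerNode", "2")
      // Stage-level limits that can blacklist an executor or node for a stage:
      .set("spark.blacklist.stage.maxFailedTasksPerExecutor", "2")
      .set("spark.blacklist.stage.maxFailedExecutorsPerNode", "2")
      // Expiry window checked by "scheduler checks for executors that can be
      // expired from blacklist":
      .set("spark.blacklist.timeout", "1h")
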
[info] - error handling wrapper (5 milliseconds) [info] RBackendSuite: [info] - close() clears jvmObjectTracker (1 millisecond) [info] NettyRpcAddressSuite: [info] - toString (1 millisecond) [info] - toString for client mode (0 milliseconds) [info] SparkContextSchedulerCreationSuite: [info] - bad-master (49 milliseconds) [info] - local (65 milliseconds) [info] - local-* (49 milliseconds) [info] - local-n (50 milliseconds) [info] - local-*-n-failures (50 milliseconds) [info] - local-n-failures (49 milliseconds) [info] - bad-local-n (48 milliseconds) [info] - bad-local-n-failures (46 milliseconds) [info] - local-default-parallelism (49 milliseconds) [info] - should clone and clean line object in ClosureCleaner (2 seconds, 505 milliseconds) [info] - local-cluster (258 milliseconds) [info] CommandUtilsSuite: [info] - set libraryPath correctly (29 milliseconds) [info] - auth secret shouldn't appear in java opts (95 milliseconds) [info] ImplicitOrderingSuite: [info] - basic inference of Orderings (83 milliseconds) [info] AsyncRDDActionsSuite: [info] - countAsync (15 milliseconds) [info] - collectAsync (15 milliseconds) [info] - foreachAsync (12 milliseconds) [info] - foreachPartitionAsync (15 milliseconds) [info] - SPARK-31399: should clone+clean line object w/ non-serializable state in ClosureCleaner (1 second, 17 milliseconds) [info] - SPARK-31399: ClosureCleaner should discover indirectly nested closure in inner class (1 second, 3 milliseconds) [info] - takeAsync (1 second, 432 milliseconds) [info] - async success handling (9 milliseconds) [info] - async failure handling (12 milliseconds) [info] - FutureAction result, infinite wait (8 milliseconds) [info] - FutureAction result, finite wait (7 milliseconds) [info] - FutureAction result, timeout (23 milliseconds) [info] - SimpleFutureAction callback must not consume a thread while waiting (34 milliseconds) [info] - ComplexFutureAction callback must not consume a thread while waiting (23 milliseconds) [info] JsonProtocolSuite: [info] - writeApplicationInfo (6 milliseconds) [info] - writeWorkerInfo (0 milliseconds) [info] - writeApplicationDescription (3 milliseconds) [info] - writeExecutorRunner (1 millisecond) [info] - writeDriverInfo (4 milliseconds) [info] - writeMasterState (1 millisecond) [info] - writeWorkerState (72 milliseconds) [info] VersionUtilsSuite: [info] - Parse Spark major version (2 milliseconds) [info] - Parse Spark minor version (1 millisecond) [info] - Parse Spark major and minor versions (2 milliseconds) [info] - Return short version number (2 milliseconds) [info] ShuffleBlockFetcherIteratorSuite: [info] - successful 3 local reads + 2 remote reads (53 milliseconds) [info] - release current unexhausted buffer in case the task completes early (14 milliseconds) [info] - fail all blocks if any of the remote request fails (23 milliseconds) [info] - newProductSeqEncoder with REPL defined class (453 milliseconds) [info] - retry corrupt blocks (20 milliseconds) [info] - big blocks are not checked for corruption (7 milliseconds) [info] - retry corrupt blocks (disabled) (14 milliseconds) [info] - Blocks should be shuffled to disk when size of the request is above the threshold(maxReqSizeShuffleToMem). 
(17 milliseconds) [info] - fail zero-size blocks (18 milliseconds) [info] BlockManagerMasterSuite: [info] - SPARK-31422: getMemoryStatus should not fail after BlockManagerMaster stops (3 milliseconds) [info] - SPARK-31422: getStorageStatus should not fail after BlockManagerMaster stops (1 millisecond) [info] DiskBlockManagerSuite: [info] - basic block creation (1 millisecond) [info] - enumerating blocks (13 milliseconds) [info] - SPARK-22227: non-block files are skipped (1 millisecond) [info] TaskSetManagerSuite: [info] - TaskSet with no preferences (53 milliseconds) [info] - multiple offers with no preferences (55 milliseconds) [info] - skip unsatisfiable locality levels (47 milliseconds) [info] - basic delay scheduling (50 milliseconds) [info] - we do not need to delay scheduling when we only have noPref tasks in the queue (48 milliseconds) [info] - delay scheduling with fallback (54 milliseconds) [info] - delay scheduling with failed hosts (48 milliseconds) [info] - task result lost (49 milliseconds) [info] - repeated failures lead to task set abortion (51 milliseconds) [info] - executors should be blacklisted after task failure, in spite of locality preferences (65 milliseconds) [info] - new executors get added and lost (47 milliseconds) [info] - Executors exit for reason unrelated to currently running tasks (44 milliseconds) [info] - test RACK_LOCAL tasks (46 milliseconds) [info] - do not emit warning when serialized task is small (43 milliseconds) [info] - user class path first in client mode (18 seconds, 33 milliseconds) [info] - emit warning when serialized task is large (54 milliseconds) [info] - Not serializable exception thrown if the task cannot be serialized (50 milliseconds) [info] GenerateUnsafeRowJoinerBitsetSuite: [info] - bitset concat: boundary size 0, 0 (705 milliseconds) [info] + num fields: 0 and 0 [info] + num fields: 0 and 0 [info] + num fields: 0 and 0 [info] + num fields: 0 and 0 [info] + num fields: 0 and 0 [info] - bitset concat: boundary size 0, 64 (62 milliseconds) [info] + num fields: 0 and 64 [info] + num fields: 0 and 64 [info] + num fields: 0 and 64 [info] + num fields: 0 and 64 [info] + num fields: 0 and 64 [info] - bitset concat: boundary size 64, 0 (30 milliseconds) [info] + num fields: 64 and 0 [info] + num fields: 64 and 0 [info] + num fields: 64 and 0 [info] + num fields: 64 and 0 [info] + num fields: 64 and 0 [info] - bitset concat: boundary size 64, 64 (45 milliseconds) [info] + num fields: 64 and 64 [info] + num fields: 64 and 64 [info] + num fields: 64 and 64 [info] + num fields: 64 and 64 [info] + num fields: 64 and 64 [info] - bitset concat: boundary size 0, 128 (31 milliseconds) [info] + num fields: 0 and 128 [info] + num fields: 0 and 128 [info] + num fields: 0 and 128 [info] + num fields: 0 and 128 [info] + num fields: 0 and 128 [info] - bitset concat: boundary size 128, 0 (33 milliseconds) [info] + num fields: 128 and 0 [info] + num fields: 128 and 0 [info] + num fields: 128 and 0 [info] + num fields: 128 and 0 [info] + num fields: 128 and 0 [info] - bitset concat: boundary size 128, 128 (118 milliseconds) [info] + num fields: 128 and 128 [info] + num fields: 128 and 128 [info] + num fields: 128 and 128 [info] + num fields: 128 and 128 [info] + num fields: 128 and 128 [info] - bitset concat: single word bitsets (20 milliseconds) [info] + num fields: 10 and 5 [info] + num fields: 10 and 5 [info] + num fields: 10 and 5 [info] + num fields: 10 and 5 [info] + num fields: 10 and 5 [info] - bitset concat: first bitset larger than a word (24 
milliseconds) [info] + num fields: 67 and 5 [info] + num fields: 67 and 5 [info] + num fields: 67 and 5 [info] + num fields: 67 and 5 [info] + num fields: 67 and 5 [info] - bitset concat: second bitset larger than a word (25 milliseconds) [info] + num fields: 6 and 67 [info] + num fields: 6 and 67 [info] + num fields: 6 and 67 [info] + num fields: 6 and 67 [info] + num fields: 6 and 67 [info] - bitset concat: no reduction in bitset size (29 milliseconds) [info] + num fields: 33 and 34 [info] + num fields: 33 and 34 [info] + num fields: 33 and 34 [info] + num fields: 33 and 34 [info] + num fields: 33 and 34 [info] - abort the job if total size of results is too large (1 second, 532 milliseconds) Exception in thread "task-result-getter-3" java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206) at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222) at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227) at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220) at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:121) at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:757) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:88) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945) at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ... 2 more
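The task-result-getter trace above belongs to the "abort the job if total size of results is too large" case: TaskResultGetter threads are interrupted once the accumulated result size trips spark.driver.maxResultSize and the job is aborted. A standalone repro sketch under our own assumptions (the 1m cap, the data volume, and all names are ours):

    import org.apache.spark.{SparkConf, SparkContext, SparkException}

    object MaxResultSizeSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("local[2]")
          .setAppName("max-result-size-sketch")
          .set("spark.driver.maxResultSize", "1m") // deliberately tiny cap
        val sc = new SparkContext(conf)
        try {
          // ~8 MB of longs serialized back to the driver exceeds the 1 MB cap,
          // so the scheduler aborts the job instead of overloading the driver.
          sc.parallelize(1L to 1000000L, 8).collect()
        } catch {
          case e: SparkException => println(s"aborted as expected: ${e.getMessage}")
        } finally {
          sc.stop()
        }
      }
    }
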
[info] - bitset concat: two words (78 milliseconds) [info] + num fields: 120 and 95 [info] + num fields: 120 and 95 [info] + num fields: 120 and 95 [info] + num fields: 120 and 95 [info] + num fields: 120 and 95 [info] - bitset concat: bitset 65, 128 (58 milliseconds) [info] + num fields: 65 and 128 [info] + num fields: 65 and 128 [info] + num fields: 65 and 128 [info] + num fields: 65 and 128 [info] + num fields: 65 and 128 [info] - [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie (68 milliseconds) [info] - [SPARK-22074] Task killed by other attempt task should not be resubmitted (58 milliseconds) [info] - speculative and noPref task should be scheduled after node-local (50 milliseconds) [info] - node-local tasks should be scheduled right away when there are only node-local and no-preference tasks (46 milliseconds) [info] - SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished (51 milliseconds) [info] - SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished (53 milliseconds) [info] - Ensure TaskSetManager is usable after addition of levels (64 milliseconds) [info] - Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL. (46 milliseconds) [info] - Test TaskLocation for different host type. (0 milliseconds) [info] - Kill other task attempts when one attempt belonging to the same task succeeds (62 milliseconds) [info] - Killing speculative tasks does not count towards aborting the taskset (69 milliseconds) [info] - SPARK-19868: DagScheduler only notified of taskEnd when state is ready (62 milliseconds) [info] - SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names (49 milliseconds) [info] - don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks (73 milliseconds) [info] - update application blacklist for shuffle-fetch (48 milliseconds) [info] - update blacklist before adding pending task to avoid race condition (51 milliseconds) [info] - SPARK-21563 context's added jars shouldn't change mid-TaskSet (49 milliseconds) [info] - [SPARK-24677] Avoid NoSuchElementException from MedianHeap (72 milliseconds) [info] - SPARK-24755 Executor loss can cause task to not be resubmitted (55 milliseconds) [info] - SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success (67 milliseconds) [info] DiskBlockObjectWriterSuite: [info] - verify write metrics (54 milliseconds) [info] - verify write metrics on revert (49 milliseconds) [info] - Reopening a closed block writer (2 milliseconds) [info] - calling revertPartialWritesAndClose() on a partial write should truncate up to commit (5 milliseconds) [info] - calling revertPartialWritesAndClose() after commit() should have no effect (3 milliseconds) [info] - calling revertPartialWritesAndClose() on a closed block writer should have no effect (4 milliseconds) [info] - commit() and close() should be idempotent (5 milliseconds) [info] - revertPartialWritesAndClose() should be idempotent (2 milliseconds) [info] - commit() and close() without ever opening or writing (1 millisecond) [info] PartitioningSuite: [info] - HashPartitioner equality (0 milliseconds) [info] - RangePartitioner equality (71 milliseconds) [info] - RangePartitioner getPartition (142 milliseconds) [info] - RangePartitioner for keys that are not Comparable (but with Ordering) (18 milliseconds) [info] - RangPartitioner.sketch (35 milliseconds) [info] - RangePartitioner.determineBounds (0 milliseconds)
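PartitioningSuite, whose first results appear above, compares RangePartitioner (which samples the RDD to pick split bounds, hence the sketch/determineBounds tests) against HashPartitioner. A small illustrative sketch; the data and names are ours:

    import org.apache.spark.{HashPartitioner, RangePartitioner, SparkConf, SparkContext}

    object PartitionerSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setMaster("local[2]").setAppName("partitioner-sketch"))
        val pairs = sc.parallelize(1 to 1000, 4).map(i => (i, i))
        // RangePartitioner samples keys to compute bounds, so keys land in
        // contiguous ranges; HashPartitioner just mods the key's hash code.
        val byRange = pairs.partitionBy(new RangePartitioner(4, pairs))
        val byHash = pairs.partitionBy(new HashPartitioner(4))
        // Mirrors "HashPartitioner not equal to RangePartitioner":
        assert(byRange.partitioner != byHash.partitioner)
        println(byRange.partitioner.get.getPartition(1)) // lowest range: 0
        sc.stop()
      }
    }
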
[info] - RangePartitioner should run only one job if data is roughly balanced (1 second, 407 milliseconds) [info] - bitset concat: randomized tests (4 seconds, 106 milliseconds) [info] + num fields: 538 and 301 [info] + num fields: 682 and 487 [info] + num fields: 145 and 809 [info] + num fields: 901 and 758 [info] + num fields: 744 and 576 [info] + num fields: 162 and 148 [info] + num fields: 448 and 832 [info] + num fields: 118 and 807 [info] + num fields: 984 and 11 [info] + num fields: 638 and 543 [info] + num fields: 984 and 334 [info] + num fields: 706 and 152 [info] + num fields: 53 and 900 [info] + num fields: 270 and 284 [info] + num fields: 778 and 658 [info] + num fields: 297 and 81 [info] + num fields: 54 and 574 [info] + num fields: 331 and 553 [info] + num fields: 400 and 560 [info] ConstraintPropagationSuite: [info] - RangePartitioner should work well on unbalanced data (1 second, 126 milliseconds) [info] - RangePartitioner should return a single partition for empty RDDs (17 milliseconds) [info] - HashPartitioner not equal to RangePartitioner (12 milliseconds) [info] - partitioner preservation (58 milliseconds) [info] - partitioning Java arrays should fail (16 milliseconds) [info] - zero-length partitions should be correctly handled (101 milliseconds) [info] - Number of elements in RDD is less than number of partitions (11 milliseconds) [info] - defaultPartitioner (4 milliseconds) [info] - defaultPartitioner when defaultParallelism is set (4 milliseconds) [info] - propagating constraints in filters (384 milliseconds) [info] PartitionwiseSampledRDDSuite: [info] - propagating constraints in aggregate (57 milliseconds) [info] - propagating constraints in expand (49 milliseconds) [info] - seed distribution (61 milliseconds) [info] - concurrency (26 milliseconds) [info] - propagating constraints in aliases (86 milliseconds) [info] PartitionPruningRDDSuite: [info] - propagating constraints in union (47 milliseconds) [info] - propagating constraints in intersect (9 milliseconds) [info] - propagating constraints in except (5 milliseconds) [info] - Pruned Partitions inherit locality prefs correctly (2 milliseconds) [info] - propagating constraints in inner join (18 milliseconds) [info] - propagating constraints in left-semi join (7 milliseconds) [info] - propagating constraints in left-outer join (8 milliseconds) [info] - propagating constraints in right-outer join (7 milliseconds) [info] - Pruned Partitions can be unioned (33 milliseconds) [info] - propagating constraints in full-outer join (10 milliseconds) [info] - infer additional constraints in filters (7 milliseconds) [info] ClosureCleanerSuite2: [info] - infer constraints on cast (61 milliseconds) [info] - get inner closure classes (5 milliseconds) [info] - get outer classes and objects (2 milliseconds) [info] - get outer classes and objects with nesting (3 milliseconds) [info] - find accessed fields (4 milliseconds) [info] - find accessed fields with nesting (5 milliseconds) [info] - clean basic serializable closures (6 milliseconds) [info] - clean basic non-serializable closures (33 milliseconds) [info] - clean basic nested serializable closures (6 milliseconds) [info] - infer isnotnull constraints from compound expressions (78 milliseconds) [info] - infer IsNotNull constraints from non-nullable attributes (3 milliseconds) [info] - clean basic nested non-serializable closures (31 milliseconds) [info] - not infer non-deterministic constraints (13 milliseconds) [info] - clean complicated nested serializable 
closures (4 milliseconds) [info] - enable/disable constraint propagation (18 milliseconds) [info] - clean complicated nested non-serializable closures (36 milliseconds) [info] - verify nested LMF closures !!! CANCELED !!! (1 millisecond) [info] ClosureCleanerSuite2.supportsLMFs was false (ClosureCleanerSuite2.scala:579) [info] org.scalatest.exceptions.TestCanceledException: [info] at org.scalatest.Assertions$class.newTestCanceledException(Assertions.scala:531) [info] at org.scalatest.FunSuite.newTestCanceledException(FunSuite.scala:1560) [info] at org.scalatest.Assertions$AssertionsHelper.macroAssume(Assertions.scala:516) [info] at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply$mcV$sp(ClosureCleanerSuite2.scala:579) [info] at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply(ClosureCleanerSuite2.scala:578) [info] at org.apache.spark.util.ClosureCleanerSuite2$$anonfun$31.apply(ClosureCleanerSuite2.scala:578) [info] at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85) [info] at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104) [info] at org.scalatest.Transformer.apply(Transformer.scala:22) [info] at org.scalatest.Transformer.apply(Transformer.scala:20) [info] at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186) [info] at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:147) [info] at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183) [info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196) [info] at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196) [info] at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289) [info] at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196) [info] at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:54) [info] at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:221) [info] at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:54) [info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229) [info] at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229) [info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396) [info] at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384) [info] at scala.collection.immutable.List.foreach(List.scala:392) [info] at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384) [info] at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379) [info] at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461) [info] at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229) [info] at org.scalatest.FunSuite.runTests(FunSuite.scala:1560) [info] at org.scalatest.Suite$class.run(Suite.scala:1147) [info] at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560) [info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233) [info] at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233) [info] at org.scalatest.SuperEngine.runImpl(Engine.scala:521) [info] at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233) [info] at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:54) [info] at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213) [info] at 
org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210) [info] at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:54) [info] at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314) [info] at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480) [info] at sbt.ForkMain$Run$2.call(ForkMain.java:296) [info] at sbt.ForkMain$Run$2.call(ForkMain.java:286) [info] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [info] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [info] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [info] at java.lang.Thread.run(Thread.java:748) [info] BufferHolderSparkSubmitSuite: [info] PrimitiveKeyOpenHashMapSuite: [info] - size for specialized, primitive key, value (int, int) (4 milliseconds) [info] - initialization (1 millisecond) [info] - basic operations (27 milliseconds) [info] - null values (1 millisecond) [info] - changeValue (5 milliseconds) [info] - inserting in capacity-1 map (2 milliseconds) [info] - contains (0 milliseconds) [info] StaticMemoryManagerSuite: [info] - single task requesting on-heap execution memory (3 milliseconds) [info] - two tasks requesting full on-heap execution memory (2 milliseconds) [info] - two tasks cannot grow past 1 / N of on-heap execution memory (2 milliseconds) [info] - tasks can block to get at least 1 / 2N of on-heap execution memory (303 milliseconds) [info] - TaskMemoryManager.cleanUpAllAllocatedMemory (304 milliseconds) [info] - tasks should not be granted a negative amount of execution memory (2 milliseconds) [info] - off-heap execution allocations cannot exceed limit (3 milliseconds) [info] - basic execution memory (6 milliseconds) [info] - basic storage memory (4 milliseconds) [info] - execution and storage isolation (2 milliseconds) [info] - unroll memory (2 milliseconds) [info] MutableURLClassLoaderSuite: [info] - child first (2 milliseconds) [info] - parent first (1 millisecond) [info] - child first can fall back (1 millisecond) [info] - child first can fail (1 millisecond) [info] - default JDK classloader get resources (0 milliseconds) [info] - parent first get resources (1 millisecond) [info] - child first get resources (5 milliseconds) [info] - driver sets context class loader in local mode (193 milliseconds) [info] SortingSuite: [info] - sortByKey (63 milliseconds) [info] - large array (69 milliseconds) [info] - large array with one split (53 milliseconds) [info] - large array with many partitions (276 milliseconds) [info] - sort descending (76 milliseconds) [info] - sort descending with one split (53 milliseconds) [info] - sort descending with many partitions (273 milliseconds) [info] - more partitions than elements (114 milliseconds) [info] - empty RDD (40 milliseconds) [info] - partition balancing (116 milliseconds) [info] - partition balancing for descending sort (101 milliseconds) [info] - get a range of elements in a sorted RDD that is on one partition (103 milliseconds) [info] - get a range of elements over multiple partitions in a descendingly sorted RDD (107 milliseconds) [info] - get a range of elements in an array not partitioned by a range partitioner (21 milliseconds) [info] - get a range of elements over multiple partitions but not taking up full partitions (124 milliseconds) [info] LocalCheckpointSuite: [info] - transform storage level (1 millisecond) [info] - basic lineage truncation (45 milliseconds) [info] - basic lineage truncation - caching 
before checkpointing (40 milliseconds) [info] - basic lineage truncation - caching after checkpointing (35 milliseconds) [info] - indirect lineage truncation (40 milliseconds) [info] - indirect lineage truncation - caching before checkpointing (40 milliseconds) [info] - indirect lineage truncation - caching after checkpointing (33 milliseconds) [info] - SPARK-22222: Buffer holder should be able to allocate memory larger than 1GB (4 seconds, 308 milliseconds) [info] CombiningLimitsSuite: [info] - limits: combines two limits (44 milliseconds) [info] - limits: combines three limits (10 milliseconds) [info] - limits: combines two limits after ColumnPruning (11 milliseconds) [info] EliminateSerializationSuite: [info] - back to back serialization (1 second, 476 milliseconds) [info] - back to back serialization with object change (50 milliseconds) [info] - back to back serialization in AppendColumns (81 milliseconds) [info] - back to back serialization in AppendColumns with object change (42 milliseconds) [info] InferFiltersFromConstraintsSuite: [info] - filter: filter out constraints in condition (21 milliseconds) [info] - single inner join: filter out values on either side on equi-join keys (38 milliseconds) [info] - single inner join: filter out nulls on either side on non equal keys (22 milliseconds) [info] - single inner join with pre-existing filters: filter out values on either side (24 milliseconds) [info] - single outer join: no null filters are generated (6 milliseconds) [info] - multiple inner joins: filter out values on all sides on equi-join keys (46 milliseconds) [info] - inner join with filter: filter out values on all sides on equi-join keys (15 milliseconds) [info] - inner join with alias: alias contains multiple attributes (35 milliseconds) [info] - inner join with alias: alias contains single attributes (30 milliseconds) [info] - generate correct filters for alias that don't produce recursive constraints (10 milliseconds) [info] - No inferred filter when constraint propagation is disabled (4 milliseconds) [info] - constraints should be inferred from aliased literals (13 milliseconds) [info] - SPARK-23405: left-semi equal-join should filter out null join keys on both sides (11 milliseconds) [info] - SPARK-21479: Outer join after-join filters push down to null-supplying side (16 milliseconds) [info] - SPARK-21479: Outer join pre-existing filters push down to null-supplying side (17 milliseconds) [info] - SPARK-21479: Outer join no filter push down to preserved side (12 milliseconds) [info] - SPARK-23564: left anti join should filter out null join keys on right side (9 milliseconds) [info] - SPARK-23564: left outer join should filter out null join keys on right side (8 milliseconds) [info] - SPARK-23564: right outer join should filter out null join keys on left side (8 milliseconds) [info] RewriteDistinctAggregatesSuite: [info] - single distinct group (13 milliseconds) [info] - single distinct group with partial aggregates (5 milliseconds) [info] - multiple distinct groups (14 milliseconds) [info] - multiple distinct groups with partial aggregates (13 milliseconds) [info] - multiple distinct groups with non-partial aggregates (10 milliseconds) [info] NullExpressionsSuite: [info] - isnull and isnotnull (827 milliseconds) [info] - AssertNotNUll (1 millisecond) [info] - IsNaN (205 milliseconds) [info] - nanvl (207 milliseconds) [info] - coalesce (1 second, 977 milliseconds) [info] - SPARK-16602 Nvl should support numeric-string cases (32 milliseconds) [info] - AtLeastNNonNulls (247 milliseconds)
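InferFiltersFromConstraintsSuite above checks the Catalyst rule that derives extra predicates from plan constraints, for example IsNotNull on both keys of an inner equi-join. A spark-shell style sketch of the observable effect; `spark` is the shell-provided session and the column names are ours:

    val left = spark.range(100).toDF("a")
    val right = spark.range(100).toDF("b")
    val joined = left.join(right, left("a") === right("b"))
    // The optimized plan should show isnotnull(a) / isnotnull(b) filters that
    // were never written by hand; compare after
    // spark.conf.set("spark.sql.constraintPropagation.enabled", "false"),
    // which is the switch behind "No inferred filter when constraint
    // propagation is disabled".
    joined.explain(true)
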
[info] - Coalesce should not throw 64kb exception (1 second, 436 milliseconds) [info] - SPARK-22705: Coalesce should use less global variables (2 milliseconds) [info] - AtLeastNNonNulls should not throw 64kb exception (1 second, 31 milliseconds) [info] AttributeSetSuite: [info] - sanity check (1 millisecond) [info] - checks by id not name (1 millisecond) [info] - ++ preserves AttributeSet (0 milliseconds) [info] - extracts all references (1 millisecond) [info] - dedups attributes (1 millisecond) [info] - subset (0 milliseconds) [info] - equality (1 millisecond) [info] - SPARK-18394 keep a deterministic output order along with attribute names and exprIds (1 millisecond) [info] - user class path first in cluster mode (20 seconds, 29 milliseconds) [info] ResolveInlineTablesSuite: [info] - validate inputs are foldable (5 milliseconds) [info] - validate input dimensions (1 millisecond) [info] - do not fire the rule if not all expressions are resolved (1 millisecond) [info] - convert (5 milliseconds) [info] - convert TimeZoneAwareExpression (2 milliseconds) [info] - nullability inference in convert (2 milliseconds) [info] SortOrderExpressionsSuite: [info] - checkpoint without draining iterator (10 seconds, 13 milliseconds) [info] - SortPrefix (533 milliseconds) [info] CheckCartesianProductsSuite: [info] - CheckCartesianProducts doesn't throw an exception if cross joins are enabled) (15 milliseconds) [info] - CheckCartesianProducts throws an exception for join types that require a join condition (11 milliseconds) [info] - CheckCartesianProducts doesn't throw an exception if a join condition is present (10 milliseconds) [info] - CheckCartesianProducts doesn't throw an exception if join types don't require conditions (5 milliseconds) [info] RuleExecutorSuite: [info] - only once (2 milliseconds) [info] - to fixed point (1 millisecond) [info] - to maxIterations (2 milliseconds) [info] - structural integrity checker (1 millisecond) [info] TypeCoercionSuite: [info] - implicit type cast - ByteType (3 milliseconds) [info] - implicit type cast - ShortType (1 millisecond) [info] - implicit type cast - IntegerType (0 milliseconds) [info] - implicit type cast - LongType (0 milliseconds) [info] - implicit type cast - FloatType (1 millisecond) [info] - implicit type cast - DoubleType (1 millisecond) [info] - implicit type cast - DecimalType(10, 2) (1 millisecond) [info] - implicit type cast - BinaryType (0 milliseconds) [info] - implicit type cast - BooleanType (1 millisecond) [info] - implicit type cast - StringType (2 milliseconds) [info] - implicit type cast - DateType (1 millisecond) [info] - implicit type cast - TimestampType (1 millisecond) [info] - implicit type cast - ArrayType(StringType) (8 milliseconds) [info] - implicit type cast between two Map types (11 milliseconds) [info] - implicit type cast - StructType().add("a1", StringType) (9 milliseconds) [info] - implicit type cast - NullType (1 millisecond) [info] - implicit type cast - CalendarIntervalType (1 millisecond) [info] - eligible implicit type cast - TypeCollection (2 milliseconds) [info] - ineligible implicit type cast - TypeCollection (0 milliseconds) [info] - tightest common bound for types (6 milliseconds) [info] - wider common type for decimal and array (9 milliseconds) [info] - cast NullType for expressions that implement ExpectsInputTypes (3 milliseconds) [info] - cast NullType for binary operators (3 milliseconds) [info] - coalesce casts (12 milliseconds) [info] - CreateArray casts (4 milliseconds) [info] - CreateMap 
casts (9 milliseconds) [info] - greatest/least cast (16 milliseconds) [info] - nanvl casts (5 milliseconds) [info] - type coercion for If (9 milliseconds) [info] - type coercion for CaseKeyWhen (9 milliseconds) [info] - type coercion for Stack (10 milliseconds) [info] - type coercion for Concat (11 milliseconds) [info] - type coercion for Elt (11 milliseconds) [info] - BooleanEquality type cast (4 milliseconds) [info] - BooleanEquality simplification (5 milliseconds) [info] - WidenSetOperationTypes for except and intersect (12 milliseconds) [info] - WidenSetOperationTypes for union (3 milliseconds) [info] - Transform Decimal precision/scale for union except and intersect (10 milliseconds) [info] - rule for date/timestamp operations (10 milliseconds) [info] - make sure rules do not fire early (4 milliseconds) [info] - SPARK-15776 Divide expression's dataType should be casted to Double or Decimal in aggregation function like sum (4 milliseconds) [info] - SPARK-17117 null type coercion in divide (2 milliseconds) [info] - binary comparison with string promotion (7 milliseconds) [info] - cast WindowFrame boundaries to the type they operate upon (5 milliseconds) [info] EncoderErrorMessageSuite: [info] - primitive types in encoders using Kryo serialization (5 milliseconds) [info] - primitive types in encoders using Java serialization (1 millisecond) [info] - nice error message for missing encoder (56 milliseconds) [info] RegexpExpressionsSuite: [info] - LIKE Pattern (1 second, 349 milliseconds) [info] - RLIKE Regular Expression (601 milliseconds) [info] - RegexReplace (159 milliseconds) [info] - SPARK-22570: RegExpReplace should not create a lot of global variables (0 milliseconds) [info] - RegexExtract (193 milliseconds) [info] - SPLIT (65 milliseconds) [info] - SPARK-30759: cache initialization for literal patterns (1 millisecond) [info] NondeterministicSuite: [info] - MonotonicallyIncreasingID (17 milliseconds) [info] - SparkPartitionID (20 milliseconds) [info] - InputFileName (21 milliseconds) [info] EncoderResolutionSuite: [info] - real type doesn't match encoder schema but they are compatible: product (44 milliseconds) [info] - real type doesn't match encoder schema but they are compatible: nested product (52 milliseconds) [info] - real type doesn't match encoder schema but they are compatible: tupled encoder (26 milliseconds) [info] - real type doesn't match encoder schema but they are compatible: primitive array (32 milliseconds) [info] - the real type is not compatible with encoder schema: primitive array (15 milliseconds) [info] - real type doesn't match encoder schema but they are compatible: array (243 milliseconds) [info] - real type doesn't match encoder schema but they are compatible: nested array (83 milliseconds) [info] - the real type is not compatible with encoder schema: non-array field (22 milliseconds) [info] - the real type is not compatible with encoder schema: array element type (36 milliseconds) [info] - the real type is not compatible with encoder schema: nested array element type (98 milliseconds) [info] - nullability of array type element should not fail analysis (26 milliseconds) [info] - the real number of fields doesn't match encoder schema: tuple encoder (14 milliseconds) [info] - the real number of fields doesn't match encoder schema: nested tuple encoder (28 milliseconds) [info] - nested case class can have different number of fields from the real schema (26 milliseconds) [info] - throw exception if real type is not compatible with encoder schema (36 milliseconds)
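EncoderResolutionSuite's cast cases, which follow below, pin down which implicit up-casts are legal when a Dataset's schema is resolved against its Scala type: widening (int to Long) succeeds, narrowing (bigint to Int) fails at analysis. A spark-shell style sketch, relying on the shell's auto-imported spark.implicits._; the column name `v` is ours:

    val widened = spark.range(3).selectExpr("cast(id as int) as v").as[Long]
    widened.collect() // int -> Long widens, so resolution succeeds

    // spark.range(3).as[Int] // bigint -> Int narrows and would fail analysis
    //                        // with "Cannot up cast", like the "should fail" cases
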
[info] - cast from int to Long should success (1 millisecond) [info] - cast from date to java.sql.Timestamp should success (1 millisecond) [info] - cast from bigint to String should success (2 milliseconds) [info] - cast from int to java.math.BigDecimal should success (2 milliseconds) [info] - cast from bigint to java.math.BigDecimal should success (2 milliseconds) [info] - cast from bigint to Int should fail (1 millisecond) [info] - cast from timestamp to java.sql.Date should fail (1 millisecond) [info] - cast from decimal(38,18) to Double should fail (1 millisecond) [info] - cast from double to java.math.BigDecimal should fail (1 millisecond) [info] - cast from decimal(38,18) to Int should fail (1 millisecond) [info] - cast from string to Long should fail (1 millisecond) [info] ExpressionEncoderSuite: [info] - encode/decode for primitive boolean: false (codegen path) (39 milliseconds) [info] - encode/decode for primitive boolean: false (interpreted path) (4 milliseconds) [info] - encode/decode for primitive byte: -3 (codegen path) (13 milliseconds) [info] - encode/decode for primitive byte: -3 (interpreted path) (3 milliseconds) [info] - encode/decode for primitive short: -3 (codegen path) (11 milliseconds) [info] - encode/decode for primitive short: -3 (interpreted path) (3 milliseconds) [info] - encode/decode for primitive int: -3 (codegen path) (15 milliseconds) [info] - encode/decode for primitive int: -3 (interpreted path) (3 milliseconds) [info] - encode/decode for primitive long: -3 (codegen path) (12 milliseconds) [info] - encode/decode for primitive long: -3 (interpreted path) (4 milliseconds) [info] - encode/decode for primitive float: -3.7 (codegen path) (11 milliseconds) [info] - encode/decode for primitive float: -3.7 (interpreted path) (3 milliseconds) [info] - encode/decode for primitive double: -3.7 (codegen path) (12 milliseconds) [info] - encode/decode for primitive double: -3.7 (interpreted path) (3 milliseconds) [info] - encode/decode for boxed boolean: false (codegen path) (15 milliseconds) [info] - encode/decode for boxed boolean: false (interpreted path) (4 milliseconds) [info] - encode/decode for boxed byte: -3 (codegen path) (15 milliseconds) [info] - encode/decode for boxed byte: -3 (interpreted path) (3 milliseconds) [info] - encode/decode for boxed short: -3 (codegen path) (13 milliseconds) [info] - encode/decode for boxed short: -3 (interpreted path) (4 milliseconds) [info] - encode/decode for boxed int: -3 (codegen path) (19 milliseconds) [info] - encode/decode for boxed int: -3 (interpreted path) (4 milliseconds) [info] - encode/decode for boxed long: -3 (codegen path) (14 milliseconds) [info] - encode/decode for boxed long: -3 (interpreted path) (4 milliseconds) [info] - encode/decode for boxed float: -3.7 (codegen path) (14 milliseconds) [info] - encode/decode for boxed float: -3.7 (interpreted path) (4 milliseconds) [info] - encode/decode for boxed double: -3.7 (codegen path) (18 milliseconds) [info] - encode/decode for boxed double: -3.7 (interpreted path) (4 milliseconds) [info] - encode/decode for scala decimal: 32131413.211321313 (codegen path) (20 milliseconds) [info] - encode/decode for scala decimal: 32131413.211321313 (interpreted path) (4 milliseconds) [info] - encode/decode for java decimal: 231341.23123 (codegen path) (16 milliseconds) [info] - encode/decode for java decimal: 231341.23123 (interpreted path) (4 milliseconds) [info] - encode/decode for scala biginteger: 23134123123 (codegen path) (16 milliseconds) [info] - 
encode/decode for scala biginteger: 23134123123 (interpreted path) (5 milliseconds)
[info] - encode/decode for java BigInteger: 23134123123 (codegen path) (17 milliseconds)
[info] - encode/decode for java BigInteger: 23134123123 (interpreted path) (3 milliseconds)
[info] - encode/decode for catalyst decimal: 32131413.211321313 (codegen path) (11 milliseconds)
[info] - encode/decode for catalyst decimal: 32131413.211321313 (interpreted path) (3 milliseconds)
[info] - encode/decode for string: hello (codegen path) (16 milliseconds)
[info] - encode/decode for string: hello (interpreted path) (4 milliseconds)
[info] - encode/decode for date: 2012-12-23 (codegen path) (18 milliseconds)
[info] - encode/decode for date: 2012-12-23 (interpreted path) (4 milliseconds)
[info] - encode/decode for timestamp: 2016-01-29 10:00:00.0 (codegen path) (20 milliseconds)
[info] - encode/decode for timestamp: 2016-01-29 10:00:00.0 (interpreted path) (4 milliseconds)
[info] - encode/decode for array of timestamp: [Ljava.sql.Timestamp;@2362fa04 (codegen path) (26 milliseconds)
[info] - encode/decode for array of timestamp: [Ljava.sql.Timestamp;@2362fa04 (interpreted path) (17 milliseconds)
[info] - encode/decode for binary: [B@4e4f0a75 (codegen path) (12 milliseconds)
[info] - encode/decode for binary: [B@4e4f0a75 (interpreted path) (3 milliseconds)
[info] - encode/decode for seq of int: List(31, -123, 4) (codegen path) (23 milliseconds)
[info] - encode/decode for seq of int: List(31, -123, 4) (interpreted path) (13 milliseconds)
[info] - encode/decode for seq of string: List(abc, xyz) (codegen path) (25 milliseconds)
[info] - encode/decode for seq of string: List(abc, xyz) (interpreted path) (13 milliseconds)
[info] - encode/decode for seq of string with null: List(abc, null, xyz) (codegen path) (25 milliseconds)
[info] - encode/decode for seq of string with null: List(abc, null, xyz) (interpreted path) (16 milliseconds)
[info] - encode/decode for empty seq of int: List() (codegen path) (16 milliseconds)
[info] - encode/decode for empty seq of int: List() (interpreted path) (17 milliseconds)
[info] - encode/decode for empty seq of string: List() (codegen path) (38 milliseconds)
[info] - encode/decode for empty seq of string: List() (interpreted path) (19 milliseconds)
[info] - encode/decode for seq of seq of int: List(List(31, -123), null, List(4, 67)) (codegen path) (31 milliseconds)
[info] - encode/decode for seq of seq of int: List(List(31, -123), null, List(4, 67)) (interpreted path) (30 milliseconds)
[info] - encode/decode for seq of seq of string: List(List(abc, xyz), List(null), null, List(1, null, 2)) (codegen path) (32 milliseconds)
[info] - encode/decode for seq of seq of string: List(List(abc, xyz), List(null), null, List(1, null, 2)) (interpreted path) (18 milliseconds)
[info] - encode/decode for array of int: [I@411708b5 (codegen path) (24 milliseconds)
[info] - encode/decode for array of int: [I@411708b5 (interpreted path) (11 milliseconds)
[info] - encode/decode for array of string: [Ljava.lang.String;@6d421023 (codegen path) (21 milliseconds)
[info] - encode/decode for array of string: [Ljava.lang.String;@6d421023 (interpreted path) (13 milliseconds)
[info] - encode/decode for array of string with null: [Ljava.lang.String;@6377a22c (codegen path) (22 milliseconds)
[info] - encode/decode for array of string with null: [Ljava.lang.String;@6377a22c (interpreted path) (12 milliseconds)
[info] - encode/decode for empty array of int: [I@2e397eb5 (codegen path) (14 milliseconds)
[info] - encode/decode for empty array of int: [I@2e397eb5 (interpreted path) (13 milliseconds)
[info] - encode/decode for empty array of string: [Ljava.lang.String;@2a8347a2 (codegen path) (23 milliseconds)
[info] - encode/decode for empty array of string: [Ljava.lang.String;@2a8347a2 (interpreted path) (12 milliseconds)
[info] - encode/decode for array of array of int: [[I@27d90629 (codegen path) (36 milliseconds)
[info] - encode/decode for array of array of int: [[I@27d90629 (interpreted path) (18 milliseconds)
[info] - encode/decode for array of array of string: [[Ljava.lang.String;@76234fb1 (codegen path) (28 milliseconds)
[info] - encode/decode for array of array of string: [[Ljava.lang.String;@76234fb1 (interpreted path) (17 milliseconds)
[info] - encode/decode for map: Map(1 -> a, 2 -> b) (codegen path) (33 milliseconds)
[info] - encode/decode for map: Map(1 -> a, 2 -> b) (interpreted path) (5 milliseconds)
[info] - encode/decode for map with null: Map(1 -> a, 2 -> null) (codegen path) (28 milliseconds)
[info] - encode/decode for map with null: Map(1 -> a, 2 -> null) (interpreted path) (6 milliseconds)
[info] - encode/decode for map of map: Map(1 -> Map(a -> 1), 2 -> Map(b -> 2)) (codegen path) (39 milliseconds)
[info] - encode/decode for map of map: Map(1 -> Map(a -> 1), 2 -> Map(b -> 2)) (interpreted path) (5 milliseconds)
[info] - encode/decode for null seq in tuple: (null) (codegen path) (22 milliseconds)
[info] - encode/decode for null seq in tuple: (null) (interpreted path) (21 milliseconds)
[info] - encode/decode for null map in tuple: (null) (codegen path) (36 milliseconds)
[info] - encode/decode for null map in tuple: (null) (interpreted path) (6 milliseconds)
[info] - encode/decode for list of int: List(1, 2) (codegen path) (22 milliseconds)
[info] - encode/decode for list of int: List(1, 2) (interpreted path) (13 milliseconds)
[info] - encode/decode for list with String and null: List(a, null) (codegen path) (27 milliseconds)
[info] - encode/decode for list with String and null: List(a, null) (interpreted path) (14 milliseconds)
[info] - encode/decode for udt with case class: UDTCaseClass(http://spark.apache.org/) (codegen path) (19 milliseconds)
[info] - encode/decode for udt with case class: UDTCaseClass(http://spark.apache.org/) (interpreted path) (7 milliseconds)
[info] - encode/decode for kryo string: hello (codegen path) (239 milliseconds)
[info] - encode/decode for kryo string: hello (interpreted path) (13 milliseconds)
[info] - encode/decode for kryo object: org.apache.spark.sql.catalyst.encoders.KryoSerializable@f (codegen path) (39 milliseconds)
[info] - encode/decode for kryo object: org.apache.spark.sql.catalyst.encoders.KryoSerializable@f (interpreted path) (13 milliseconds)
[info] - encode/decode for java string: hello (codegen path) (21 milliseconds)
[info] - encode/decode for java string: hello (interpreted path) (3 milliseconds)
[info] - encode/decode for java object: org.apache.spark.sql.catalyst.encoders.JavaSerializable@f (codegen path) (15 milliseconds)
[info] - encode/decode for java object: org.apache.spark.sql.catalyst.encoders.JavaSerializable@f (interpreted path) (4 milliseconds)
[info] - encode/decode for InnerClass: InnerClass(1) (codegen path) (18 milliseconds)
[info] - encode/decode for InnerClass: InnerClass(1) (interpreted path) (7 milliseconds)
[info] - encode/decode for array of inner class: [Lorg.apache.spark.sql.catalyst.encoders.ExpressionEncoderSuite$InnerClass;@17f36a30 (codegen path) (46 milliseconds)
[info] - encode/decode for array of inner class: [Lorg.apache.spark.sql.catalyst.encoders.ExpressionEncoderSuite$InnerClass;@17f36a30 (interpreted path) (24 milliseconds)
[info] - encode/decode for array of optional inner class: [Lscala.Option;@46be6d4 (codegen path) (50 milliseconds)
[info] - encode/decode for array of optional inner class: [Lscala.Option;@46be6d4 (interpreted path) (33 milliseconds)
[info] - multiple failures with updateStateByKey (38 seconds, 485 milliseconds)
[info] - encode/decode for PrimitiveData: PrimitiveData(1,1,1.0,1.0,1,1,true) (codegen path) (38 milliseconds)
[info] - encode/decode for PrimitiveData: PrimitiveData(1,1,1.0,1.0,1,1,true) (interpreted path) (11 milliseconds)
[info] BasicOperationsSuite:
[info] - encode/decode for OptionalData: OptionalData(Some(2),Some(2),Some(2.0),Some(2.0),Some(2),Some(2),Some(true),Some(PrimitiveData(1,1,1.0,1.0,1,1,true))) (codegen path) (99 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(Some(2),Some(2),Some(2.0),Some(2.0),Some(2),Some(2),Some(true),Some(PrimitiveData(1,1,1.0,1.0,1,1,true))) (interpreted path) (31 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(None,None,None,None,None,None,None,None) (codegen path) (43 milliseconds)
[info] - encode/decode for OptionalData: OptionalData(None,None,None,None,None,None,None,None) (interpreted path) (29 milliseconds)
[info] - encode/decode for Option in array: List(Some(1), None) (codegen path) (35 milliseconds)
[info] - encode/decode for Option in array: List(Some(1), None) (interpreted path) (23 milliseconds)
[info] - encode/decode for Option in map: Map(1 -> Some(10), 2 -> Some(20), 3 -> None) (codegen path) (36 milliseconds)
[info] - encode/decode for Option in map: Map(1 -> Some(10), 2 -> Some(20), 3 -> None) (interpreted path) (7 milliseconds)
[info] - map (254 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(1,1,1.0,1.0,1,1,true) (codegen path) (48 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(1,1,1.0,1.0,1,1,true) (interpreted path) (16 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(null,null,null,null,null,null,null) (codegen path) (19 milliseconds)
[info] - encode/decode for BoxedData: BoxedData(null,null,null,null,null,null,null) (interpreted path) (14 milliseconds)
[info] - encode/decode for RepeatedStruct: RepeatedStruct(List(PrimitiveData(1,1,1.0,1.0,1,1,true))) (codegen path) (71 milliseconds)
[info] - encode/decode for RepeatedStruct: RepeatedStruct(List(PrimitiveData(1,1,1.0,1.0,1,1,true))) (interpreted path) (47 milliseconds)
[info] - encode/decode for Tuple3: (1,test,PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (43 milliseconds)
[info] - encode/decode for Tuple3: (1,test,PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (13 milliseconds)
[info] - flatMap (248 milliseconds)
[info] - encode/decode for RepeatedData: RepeatedData(List(1, 2),List(1, null, 2),Map(1 -> 2),Map(1 -> null),PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (78 milliseconds)
[info] - encode/decode for RepeatedData: RepeatedData(List(1, 2),List(1, null, 2),Map(1 -> 2),Map(1 -> null),PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (45 milliseconds)
[info] - encode/decode for NestedArray: NestedArray([[I@1956fd26) (codegen path) (29 milliseconds)
[info] - encode/decode for NestedArray: NestedArray([[I@1956fd26) (interpreted path) (18 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(String, String)],List((a,b))) (codegen path) (44 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(String, String)],List((a,b))) (interpreted path) (29 milliseconds)
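
The ExpressionEncoderSuite entries above exercise the round trip between JVM objects and Catalyst's InternalRow, once through generated code and once through the interpreted fallback. A minimal sketch of that round trip, using the internal ExpressionEncoder API as it exists on branch-2.4 (internal, so signatures may differ in other versions):

    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

    case class PrimitiveData(intField: Int, longField: Long, doubleField: Double)

    // Derive an encoder, serialize the object to an InternalRow, then
    // resolve/bind the deserializer and decode back to an object.
    val enc = ExpressionEncoder[PrimitiveData]()
    val row = enc.toRow(PrimitiveData(1, 1L, 1.0))
    val decoded = enc.resolveAndBind().fromRow(row)
    assert(decoded == PrimitiveData(1, 1L, 1.0))

The "(codegen path)" / "(interpreted path)" suffixes in the log correspond to running this conversion with and without Janino-generated projection code.
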
[info] - filter (258 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Int, Int)],List((1,2))) (codegen path) (42 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Int, Int)],List((1,2))) (interpreted path) (31 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Long, Long)],List((1,2))) (codegen path) (55 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Long, Long)],List((1,2))) (interpreted path) (37 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Float, Float)],List((1.0,2.0))) (codegen path) (47 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Float, Float)],List((1.0,2.0))) (interpreted path) (31 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Double, Double)],List((1.0,2.0))) (codegen path) (58 milliseconds)
[info] - glom (287 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Double, Double)],List((1.0,2.0))) (interpreted path) (27 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Short, Short)],List((1,2))) (codegen path) (46 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Short, Short)],List((1,2))) (interpreted path) (28 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Byte, Byte)],List((1,2))) (codegen path) (43 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Byte, Byte)],List((1,2))) (interpreted path) (26 milliseconds)
[info] - mapPartitions (244 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Boolean, Boolean)],List((true,false))) (codegen path) (80 milliseconds)
[info] - encode/decode for Tuple2: (Seq[(Boolean, Boolean)],List((true,false))) (interpreted path) (27 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(String, String)],ArrayBuffer((a,b))) (codegen path) (50 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(String, String)],ArrayBuffer((a,b))) (interpreted path) (26 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Int, Int)],ArrayBuffer((1,2))) (codegen path) (45 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Int, Int)],ArrayBuffer((1,2))) (interpreted path) (26 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Long, Long)],ArrayBuffer((1,2))) (codegen path) (43 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Long, Long)],ArrayBuffer((1,2))) (interpreted path) (26 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Float, Float)],ArrayBuffer((1.0,2.0))) (codegen path) (52 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Float, Float)],ArrayBuffer((1.0,2.0))) (interpreted path) (29 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Double, Double)],ArrayBuffer((1.0,2.0))) (codegen path) (45 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Double, Double)],ArrayBuffer((1.0,2.0))) (interpreted path) (28 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Short, Short)],ArrayBuffer((1,2))) (codegen path) (43 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Short, Short)],ArrayBuffer((1,2))) (interpreted path) (26 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Byte, Byte)],ArrayBuffer((1,2))) (codegen path) (43 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Byte, Byte)],ArrayBuffer((1,2))) (interpreted path) (31 milliseconds)
[info] - repartition (more partitions) (542 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Boolean, Boolean)],ArrayBuffer((true,false))) (codegen path) (50 milliseconds)
[info] - encode/decode for Tuple2: (ArrayBuffer[(Boolean, Boolean)],ArrayBuffer((true,false))) (interpreted path) (29 milliseconds)
[info] - encode/decode for Tuple2: (Seq[Seq[(Int, Int)]],List(List((1,2)))) (codegen path) (64 milliseconds)
[info] - encode/decode for Tuple2: (Seq[Seq[(Int, Int)]],List(List((1,2)))) (interpreted path) (39 milliseconds)
[info] - encode/decode for tuple with 2 flat encoders: (1,10) (codegen path) (16 milliseconds)
[info] - encode/decode for tuple with 2 flat encoders: (1,10) (interpreted path) (5 milliseconds)
[info] - encode/decode for tuple with 2 product encoders: (PrimitiveData(1,1,1.0,1.0,1,1,true),(3,30)) (codegen path) (69 milliseconds)
[info] - encode/decode for tuple with 2 product encoders: (PrimitiveData(1,1,1.0,1.0,1,1,true),(3,30)) (interpreted path) (22 milliseconds)
[info] - encode/decode for tuple with flat encoder and product encoder: (PrimitiveData(1,1,1.0,1.0,1,1,true),3) (codegen path) (40 milliseconds)
[info] - encode/decode for tuple with flat encoder and product encoder: (PrimitiveData(1,1,1.0,1.0,1,1,true),3) (interpreted path) (12 milliseconds)
[info] - encode/decode for tuple with product encoder and flat encoder: (3,PrimitiveData(1,1,1.0,1.0,1,1,true)) (codegen path) (33 milliseconds)
[info] - encode/decode for tuple with product encoder and flat encoder: (3,PrimitiveData(1,1,1.0,1.0,1,1,true)) (interpreted path) (13 milliseconds)
[info] - encode/decode for nested tuple encoder: (1,(10,100)) (codegen path) (23 milliseconds)
[info] - encode/decode for nested tuple encoder: (1,(10,100)) (interpreted path) (7 milliseconds)
[info] - repartition (fewer partitions) (441 milliseconds)
[info] - encode/decode for primitive value class: PrimitiveValueClass(42) (codegen path) (15 milliseconds)
[info] - encode/decode for primitive value class: PrimitiveValueClass(42) (interpreted path) (5 milliseconds)
[info] - encode/decode for reference value class: ReferenceValueClass(Container(1)) (codegen path) (21 milliseconds)
[info] - encode/decode for reference value class: ReferenceValueClass(Container(1)) (interpreted path) (5 milliseconds)
[info] - encode/decode for option of int: Some(31) (codegen path) (14 milliseconds)
[info] - encode/decode for option of int: Some(31) (interpreted path) (3 milliseconds)
[info] - encode/decode for empty option of int: None (codegen path) (4 milliseconds)
[info] - encode/decode for empty option of int: None (interpreted path) (3 milliseconds)
[info] - checkpoint without draining iterator - caching before checkpointing (9 seconds, 616 milliseconds)
[info] - encode/decode for option of string: Some(abc) (codegen path) (17 milliseconds)
[info] - encode/decode for option of string: Some(abc) (interpreted path) (3 milliseconds)
[info] - encode/decode for empty option of string: None (codegen path) (4 milliseconds)
[info] - encode/decode for empty option of string: None (interpreted path) (3 milliseconds)
[info] - encode/decode for Tuple2: (UDT,org.apache.spark.sql.catalyst.encoders.ExamplePoint@691) (codegen path) (20 milliseconds)
[info] - encode/decode for Tuple2: (UDT,org.apache.spark.sql.catalyst.encoders.ExamplePoint@691) (interpreted path) (6 milliseconds)
[info] - nullable of encoder schema (codegen path) (43 milliseconds)
[info] - nullable of encoder schema (interpreted path) (41 milliseconds)
[info] - nullable of encoder serializer (codegen path) (7 milliseconds)
[info] - nullable of encoder serializer (interpreted path) (6 milliseconds)
[info] - null check for map key: String (codegen path) (21 milliseconds)
[info] - null check for map key: String (interpreted path) (18 milliseconds)
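
The "flat encoder" / "product encoder" combinations above come from composing per-field encoders into tuple encoders. Through the public Dataset API this composition happens implicitly; a small, hedged sketch of the same shape (a local SparkSession created here for illustration):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[2]").appName("encoders").getOrCreate()
    import spark.implicits._

    // Mirrors the "nested tuple encoder: (1,(10,100))" case above: an implicit
    // encoder for Int is composed with an encoder for the nested (Int, Int).
    val ds = Seq((1, (10, 100))).toDS()
    assert(ds.collect().toSeq == Seq((1, (10, 100))))
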
[info] - null check for map key: Integer (codegen path) (19 milliseconds)
[info] - null check for map key: Integer (interpreted path) (18 milliseconds)
[info] - groupByKey (291 milliseconds)
[info] UnsafeRowConverterSuite:
[info] - basic conversion with only primitive types with CODEGEN_ONLY (6 milliseconds)
[info] - basic conversion with only primitive types with NO_CODEGEN (0 milliseconds)
[info] - basic conversion with primitive, string and binary types with CODEGEN_ONLY (7 milliseconds)
[info] - basic conversion with primitive, string and binary types with NO_CODEGEN (1 millisecond)
[info] - basic conversion with primitive, string, date and timestamp types with CODEGEN_ONLY (8 milliseconds)
[info] - basic conversion with primitive, string, date and timestamp types with NO_CODEGEN (0 milliseconds)
[info] - null handling with CODEGEN_ONLY (10 milliseconds)
[info] - null handling with NO_CODEGEN (2 milliseconds)
[info] - NaN canonicalization with CODEGEN_ONLY (7 milliseconds)
[info] - NaN canonicalization with NO_CODEGEN (1 millisecond)
[info] - basic conversion with struct type with CODEGEN_ONLY (11 milliseconds)
[info] - basic conversion with struct type with NO_CODEGEN (1 millisecond)
[info] - basic conversion with array type with CODEGEN_ONLY (9 milliseconds)
[info] - basic conversion with array type with NO_CODEGEN (1 millisecond)
[info] - basic conversion with map type with CODEGEN_ONLY (11 milliseconds)
[info] - basic conversion with map type with NO_CODEGEN (1 millisecond)
[info] - basic conversion with struct and array with CODEGEN_ONLY (9 milliseconds)
[info] - basic conversion with struct and array with NO_CODEGEN (1 millisecond)
[info] - basic conversion with struct and map with CODEGEN_ONLY (10 milliseconds)
[info] - basic conversion with struct and map with NO_CODEGEN (1 millisecond)
[info] - basic conversion with array and map with CODEGEN_ONLY (13 milliseconds)
[info] - basic conversion with array and map with NO_CODEGEN (1 millisecond)
[info] FilterPushdownSuite:
[info] - eliminate subqueries (4 milliseconds)
[info] - simple push down (4 milliseconds)
[info] - combine redundant filters (5 milliseconds)
[info] - do not combine non-deterministic filters even if they are identical (4 milliseconds)
[info] - SPARK-16164: Filter pushdown should keep the ordering in the logical plan (4 milliseconds)
[info] - SPARK-16994: filter should not be pushed through limit (3 milliseconds)
[info] - can't push without rewrite (6 milliseconds)
[info] - nondeterministic: can always push down filter through project with deterministic field (9 milliseconds)
[info] - nondeterministic: can't push down filter through project with nondeterministic field (3 milliseconds)
[info] - nondeterministic: can't push down filter through aggregate with nondeterministic field (7 milliseconds)
[info] - reduceByKey (237 milliseconds)
[info] - nondeterministic: push down part of filter through aggregate with deterministic field (21 milliseconds)
[info] - filters: combines filters (5 milliseconds)
[info] - joins: push to either side (8 milliseconds)
[info] - joins: push to one side (5 milliseconds)
[info] - joins: do not push down non-deterministic filters into join condition (5 milliseconds)
[info] - joins: push to one side after transformCondition (12 milliseconds)
[info] - joins: rewrite filter to push to either side (10 milliseconds)
[info] - joins: push down left semi join (11 milliseconds)
[info] - joins: push down left outer join #1 (12 milliseconds)
[info] - joins: push down right outer join #1 (14 milliseconds)
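
The UnsafeRowConverterSuite block above checks the conversion of generic rows into Spark's compact UnsafeRow binary format, once with generated code (CODEGEN_ONLY) and once interpreted (NO_CODEGEN). A minimal sketch of that conversion using internal Catalyst APIs (version-specific; shown only to illustrate what the suite covers):

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
    import org.apache.spark.unsafe.types.UTF8String

    // Project a generic InternalRow into a single contiguous UnsafeRow buffer.
    val schema = StructType(Seq(StructField("i", IntegerType), StructField("s", StringType)))
    val toUnsafe = UnsafeProjection.create(schema)
    val unsafeRow = toUnsafe(InternalRow(42, UTF8String.fromString("hello")))
    assert(unsafeRow.getInt(0) == 42)
    assert(unsafeRow.getUTF8String(1).toString == "hello")
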
[info] - joins: push down left outer join #2 (14 milliseconds)
[info] - joins: push down right outer join #2 (14 milliseconds)
[info] - joins: push down left outer join #3 (14 milliseconds)
[info] - joins: push down right outer join #3 (12 milliseconds)
[info] - joins: push down left outer join #4 (11 milliseconds)
[info] - joins: push down right outer join #4 (10 milliseconds)
[info] - joins: push down left outer join #5 (11 milliseconds)
[info] - joins: push down right outer join #5 (13 milliseconds)
[info] - joins: can't push down (5 milliseconds)
[info] - joins: conjunctive predicates (8 milliseconds)
[info] - joins: conjunctive predicates #2 (8 milliseconds)
[info] - joins: conjunctive predicates #3 (14 milliseconds)
[info] - joins: push down where clause into left anti join (8 milliseconds)
[info] - joins: only push down join conditions to the right of a left anti join (8 milliseconds)
[info] - joins: only push down join conditions to the right of an existence join (7 milliseconds)
[info] - generate: predicate referenced no generated column (14 milliseconds)
[info] - generate: non-deterministic predicate referenced no generated column (16 milliseconds)
[info] - generate: part of conjuncts referenced generated column (10 milliseconds)
[info] - generate: all conjuncts referenced generated column (4 milliseconds)
[info] - reduce (302 milliseconds)
[info] - aggregate: push down filter when filter on group by expression (9 milliseconds)
[info] - aggregate: don't push down filter when filter not on group by expression (8 milliseconds)
[info] - aggregate: push down filters partially which are subset of group by expressions (10 milliseconds)
[info] - aggregate: push down filters with alias (14 milliseconds)
[info] - aggregate: push down filters with literal (11 milliseconds)
[info] - aggregate: don't push down filters that are nondeterministic (30 milliseconds)
[info] - SPARK-17712: aggregate: don't push down filters that are data-independent (9 milliseconds)
[info] - aggregate: don't push filters if the aggregate has no grouping expressions (10 milliseconds)
[info] - broadcast hint (14 milliseconds)
[info] - union (23 milliseconds)
[info] - expand (21 milliseconds)
[info] - predicate subquery: push down simple (18 milliseconds)
[info] - predicate subquery: push down complex (17 milliseconds)
[info] - SPARK-20094: don't push predicate with IN subquery into join condition (17 milliseconds)
[info] - Window: predicate push down -- basic (29 milliseconds)
[info] - Window: predicate push down -- predicates with compound predicate using only one column (13 milliseconds)
[info] - Window: predicate push down -- multi window expressions with the same window spec (16 milliseconds)
[info] - Window: predicate push down -- multi window specification - 1 (26 milliseconds)
[info] - Window: predicate push down -- multi window specification - 2 (29 milliseconds)
[info] - Window: predicate push down -- predicates with multiple partitioning columns (15 milliseconds)
[info] - Window: predicate push down -- complex predicate with the same expressions !!! IGNORED !!!
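
FilterPushdownSuite (above and continuing below) asserts when the optimizer may move a Filter below projects, joins, aggregates, and windows, and when it must not (non-deterministic predicates, limits, watermark attributes). The effect is observable from the public API; a hedged sketch, assuming the SparkSession `spark` from the earlier encoder sketch:

    import spark.implicits._

    // The deterministic predicate on `id` can be pushed below the projection;
    // a non-deterministic predicate (e.g. one built on rand()) could not.
    val df = spark.range(100).toDF("id")
      .select($"id", ($"id" * 2).as("doubled"))
      .filter($"id" > 5)
    // Inspect where the Filter sits after optimization:
    println(df.queryExecution.optimizedPlan)
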
[info] - count (354 milliseconds)
[info] - Window: no predicate push down -- predicates are not from partitioning keys (12 milliseconds)
[info] - Window: no predicate push down -- partial compound partition key (13 milliseconds)
[info] - monitor app using launcher library (11 seconds, 322 milliseconds)
[info] - Window: no predicate push down -- complex predicates containing non partitioning columns (14 milliseconds)
[info] - Window: no predicate push down -- complex predicate with different expressions (19 milliseconds)
[info] - join condition pushdown: deterministic and non-deterministic (12 milliseconds)
[info] - watermark pushdown: no pushdown on watermark attribute #1 (11 milliseconds)
[info] - watermark pushdown: no pushdown for nondeterministic filter (8 milliseconds)
[info] - watermark pushdown: full pushdown (5 milliseconds)
[info] - watermark pushdown: no pushdown on watermark attribute #2 (6 milliseconds)
[info] UnsafeArraySuite:
[info] - countByValue (312 milliseconds)
[info] - read array (245 milliseconds)
[info] - from primitive array (1 millisecond)
[info] - to primitive array (10 milliseconds)
[info] - unsafe java serialization (1 millisecond)
[info] - unsafe Kryo serialization (20 milliseconds)
[info] DecimalAggregatesSuite:
[info] - Decimal Sum Aggregation: Optimized (9 milliseconds)
[info] - Decimal Sum Aggregation: Not Optimized (2 milliseconds)
[info] - Decimal Average Aggregation: Optimized (5 milliseconds)
[info] - Decimal Average Aggregation: Not Optimized (2 milliseconds)
[info] - Decimal Sum Aggregation over Window: Optimized (8 milliseconds)
[info] - Decimal Sum Aggregation over Window: Not Optimized (9 milliseconds)
[info] - Decimal Average Aggregation over Window: Optimized (9 milliseconds)
[info] - Decimal Average Aggregation over Window: Not Optimized (11 milliseconds)
[info] DecimalExpressionSuite:
[info] - UnscaledValue (76 milliseconds)
[info] - MakeDecimal (53 milliseconds)
[info] - mapValues (297 milliseconds)
[info] - PromotePrecision (69 milliseconds)
[info] - CheckOverflow (200 milliseconds)
[info] TableIdentifierParserSuite:
[info] - table identifier (19 milliseconds)
[info] - quoted identifiers (5 milliseconds)
[info] - flatMapValues (298 milliseconds)
[info] - table identifier - strict keywords (44 milliseconds)
[info] - table identifier - non reserved keywords (118 milliseconds)
[info] - SPARK-17364 table identifier - contains number (2 milliseconds)
[info] - SPARK-17832 table identifier - contains backtick (3 milliseconds)
[info] GeneratedProjectionSuite:
[info] - union (258 milliseconds)
[info] - union with input stream return None (140 milliseconds)
[info] - StreamingContext.union (309 milliseconds)
[info] - transform (250 milliseconds)
[info] - transform with NULL (103 milliseconds)
[info] - transform with input stream return None (119 milliseconds)
[info] - transformWith (350 milliseconds)
[info] - transformWith with input stream return None (113 milliseconds)
[info] - StreamingContext.transform (231 milliseconds)
[info] - StreamingContext.transform with input stream return None (125 milliseconds)
[info] - generated projections on wider table (2 seconds, 117 milliseconds)
[info] - cogroup (301 milliseconds)
[info] - join (309 milliseconds)
[info] - leftOuterJoin (292 milliseconds)
[info] - rightOuterJoin (312 milliseconds)
[info] - fullOuterJoin (303 milliseconds)
[info] - updateStateByKey (333 milliseconds)
[info] - updateStateByKey - simple with initial value RDD (336 milliseconds)
[info] - updateStateByKey - testing time stamps as input (335 milliseconds)
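
The DecimalAggregatesSuite "Optimized" cases above refer to the optimizer rewriting sums and averages over low-precision decimals into arithmetic on unscaled Long values, using the UnscaledValue and MakeDecimal expressions that DecimalExpressionSuite also tests. A hedged sketch of a query eligible for that rewrite, assuming the SparkSession `spark` from the earlier sketches:

    import org.apache.spark.sql.functions.sum
    import spark.implicits._

    // Precision 8 fits in a Long's 18 digits, so the sum can be computed on
    // unscaled longs and converted back to a decimal only at the end.
    val decimals = spark.range(3).select($"id".cast("decimal(8,2)").as("d"))
    println(decimals.agg(sum($"d")).queryExecution.optimizedPlan)
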
[info] - updateStateByKey - with initial value RDD (331 milliseconds)
[info] - updateStateByKey - object lifecycle (335 milliseconds)
[info] - slice (2 seconds, 115 milliseconds)
[info] - slice - has not been initialized (60 milliseconds)
[info] - checkpoint without draining iterator - caching after checkpointing (9 seconds, 396 milliseconds)
[info] - checkpoint blocks exist (17 milliseconds)
[info] - checkpoint blocks exist - caching before checkpointing (16 milliseconds)
[info] - rdd cleanup - map and window (327 milliseconds)
[info] - checkpoint blocks exist - caching after checkpointing (17 milliseconds)
[info]
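
Interleaved throughout this run are the DStream tests from BasicOperationsSuite and the checkpoint suites: map/flatMap/filter, joins, updateStateByKey, and checkpoint block management. A minimal, self-contained sketch of the updateStateByKey pattern those tests exercise (the socket source and checkpoint path here are placeholders, not taken from the log):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("stateful-counts")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint("/tmp/ckpt")  // stateful ops and recovery require a checkpoint dir

    // Running word counts: fold each batch's values into the previous state.
    val counts = ssc.socketTextStream("localhost", 9999)
      .flatMap(_.split(" "))
      .map((_, 1))
      .updateStateByKey((values: Seq[Int], state: Option[Int]) =>
        Some(values.sum + state.getOrElse(0)))
    counts.print()
    // ssc.start(); ssc.awaitTermination()
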