GitHub pull request #31209 of commit 8e883180b310aa055beb40d11da30a973cb168f7, no merge conflicts.
Running as SYSTEM
Setting status of 8e883180b310aa055beb40d11da30a973cb168f7 to PENDING with url https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/134162/ and message: 'Build started for merge commit.'
FileNotFoundException means that the credentials Jenkins is using are probably wrong, or the user account does not have write access to the repo.
org.kohsuke.github.GHFileNotFoundException: https://api.github.com/repos/apache/spark/statuses/8e883180b310aa055beb40d11da30a973cb168f7 {"message":"Not Found","documentation_url":"https://docs.github.com/rest/reference/repos#create-a-commit-status"}
    at org.kohsuke.github.GitHubClient.interpretApiError(GitHubClient.java:492)
    at org.kohsuke.github.GitHubClient.sendRequest(GitHubClient.java:420)
    at org.kohsuke.github.GitHubClient.sendRequest(GitHubClient.java:363)
    at org.kohsuke.github.Requester.fetch(Requester.java:74)
    at org.kohsuke.github.GHRepository.createCommitStatus(GHRepository.java:1906)
    at org.jenkinsci.plugins.ghprb.extensions.status.GhprbSimpleStatus.createCommitStatus(GhprbSimpleStatus.java:283)
    at org.jenkinsci.plugins.ghprb.extensions.status.GhprbSimpleStatus.onBuildStart(GhprbSimpleStatus.java:195)
    at org.jenkinsci.plugins.ghprb.GhprbBuilds.onStarted(GhprbBuilds.java:144)
    at org.jenkinsci.plugins.ghprb.GhprbBuildListener.onStarted(GhprbBuildListener.java:20)
    at hudson.model.listeners.RunListener.fireStarted(RunListener.java:238)
    at hudson.model.Run.execute(Run.java:1892)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:428)
Caused by: java.io.FileNotFoundException: https://api.github.com/repos/apache/spark/statuses/8e883180b310aa055beb40d11da30a973cb168f7
    at sun.reflect.GeneratedConstructorAccessor185.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1950)
    at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1945)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1944)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1514)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:268)
    at org.kohsuke.github.GitHubHttpUrlConnectionClient$HttpURLConnectionResponseInfo.bodyStream(GitHubHttpUrlConnectionClient.java:197)
    at org.kohsuke.github.GitHubResponse$ResponseInfo.getBodyAsString(GitHubResponse.java:326)
    at org.kohsuke.github.GitHubResponse.parseBody(GitHubResponse.java:91)
    at org.kohsuke.github.Requester.lambda$fetch$1(Requester.java:74)
    at org.kohsuke.github.GitHubClient.createResponse(GitHubClient.java:461)
    at org.kohsuke.github.GitHubClient.sendRequest(GitHubClient.java:412)
    ... 12 more
Caused by: java.io.FileNotFoundException: https://api.github.com/repos/apache/spark/statuses/8e883180b310aa055beb40d11da30a973cb168f7
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1896)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:352)
    at org.kohsuke.github.GitHubHttpUrlConnectionClient.getResponseInfo(GitHubHttpUrlConnectionClient.java:69)
    at org.kohsuke.github.GitHubClient.sendRequest(GitHubClient.java:400)
    ... 12 more
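For context on the failed call above: GhprbSimpleStatus goes through GHRepository.createCommitStatus in the kohsuke github-api library, which POSTs to /repos/apache/spark/statuses/<sha>. A minimal Scala sketch of the same call follows; the GITHUB_TOKEN variable is an assumption, and note that GitHub answers 404 Not Found both for a missing commit and for a token that lacks permission to write statuses, which is why the plugin suggests a credentials problem.

    import org.kohsuke.github.{GHCommitState, GitHub}

    object CommitStatusSketch {
      def main(args: Array[String]): Unit = {
        // Assumption: GITHUB_TOKEN holds a token allowed to write commit
        // statuses on apache/spark; otherwise the API returns 404, which
        // github-api surfaces as GHFileNotFoundException as seen above.
        val gh   = GitHub.connectUsingOAuth(sys.env("GITHUB_TOKEN"))
        val repo = gh.getRepository("apache/spark")
        repo.createCommitStatus(
          "8e883180b310aa055beb40d11da30a973cb168f7",  // commit from this build
          GHCommitState.PENDING,                        // status shown on the PR
          "https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/134162/",
          "Build started for merge commit.")            // description
      }
    }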
[EnvInject] - Loading node environment variables.
Building remotely on research-jenkins-worker-09 (ubuntu ubuntu-gpu research-09 ubuntu-avx2) in workspace /home/jenkins/workspace/SparkPullRequestBuilder
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
The recommended git tool is: NONE
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/spark.git
 > git init /home/jenkins/workspace/SparkPullRequestBuilder # timeout=10
Using reference repository: /home/jenkins/gitcaches/spark.reference
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
 > git fetch --tags --progress https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=15
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > git fetch --tags --progress https://github.com/apache/spark.git +refs/pull/31209/*:refs/remotes/origin/pr/31209/* # timeout=15
 > git rev-parse refs/remotes/origin/pr/31209/merge^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://wiki.jenkins.io/display/JENKINS/Remove+Git+Plugin+BuildsByBranch+BuildData
Checking out Revision 1c8e3887ce55c4f1937096940b9446282f8195c7 (refs/remotes/origin/pr/31209/merge)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1c8e3887ce55c4f1937096940b9446282f8195c7 # timeout=10
Commit message: "Merge 8e883180b310aa055beb40d11da30a973cb168f7 into 441fffa0bdb7e167797261455d345164829c716b"
 > git rev-list --no-walk efb9f41b233aafd3035fabfcc72650e24d8370d0 # timeout=10
First time build. Skipping changelog.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Mask passwords that will be passed as build parameters.
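The checkout sequence above initializes an empty repo, fetches branch heads, then fetches the ghprb refspec +refs/pull/31209/*:refs/remotes/origin/pr/31209/* so the pre-merged pr/31209/merge ref can be checked out detached. A rough JGit equivalent in Scala, as a sketch only (the local directory is hypothetical):

    import java.io.File
    import org.eclipse.jgit.api.Git
    import org.eclipse.jgit.transport.RefSpec

    object FetchPrMergeSketch {
      def main(args: Array[String]): Unit = {
        // git init <workspace>  (hypothetical path)
        val git = Git.init().setDirectory(new File("/tmp/SparkPullRequestBuilder")).call()
        // git fetch https://github.com/apache/spark.git +refs/pull/31209/*:refs/remotes/origin/pr/31209/*
        git.fetch()
          .setRemote("https://github.com/apache/spark.git")
          .setRefSpecs(new RefSpec("+refs/pull/31209/*:refs/remotes/origin/pr/31209/*"))
          .call()
        // git rev-parse refs/remotes/origin/pr/31209/merge^{commit}
        val merge = git.getRepository.resolve("refs/remotes/origin/pr/31209/merge^{commit}")
        // git checkout -f <sha>  (detached HEAD, as in the log; merge may be
        // null if the ref was not fetched -- unchecked here for brevity)
        git.checkout().setName(merge.getName).setForced(true).call()
      }
    }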
[SparkPullRequestBuilder] $ /bin/bash /tmp/jenkins4470897890031612904.sh
+ export LANG=en_US.UTF-8
+ LANG=en_US.UTF-8
+ export AMPLAB_JENKINS=1
+ AMPLAB_JENKINS=1
+ export PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64//bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64//bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ export PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/usr/lib/jvm/java-8-openjdk-amd64//bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ PATH=/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/usr/lib/jvm/java-8-openjdk-amd64//bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/jenkins/anaconda2/envs/py3k/bin
+ export HOME=/home/jenkins/sparkivy/per-executor-caches/0
+ HOME=/home/jenkins/sparkivy/per-executor-caches/0
+ mkdir -p /home/jenkins/sparkivy/per-executor-caches/0
+ export 'SBT_OPTS=-Duser.home=/home/jenkins/sparkivy/per-executor-caches/0 -Dsbt.ivy.home=/home/jenkins/sparkivy/per-executor-caches/0/.ivy2'
+ SBT_OPTS='-Duser.home=/home/jenkins/sparkivy/per-executor-caches/0 -Dsbt.ivy.home=/home/jenkins/sparkivy/per-executor-caches/0/.ivy2'
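An aside on the PATH exports above: each export prepends the same Maven/gems/go/anaconda directories again, so the final PATH carries five copies of the toolchain prefix. Lookup still works (first match wins), but an order-preserving dedup keeps it readable; a one-line Scala sketch:

    // Order-preserving PATH dedup: `distinct` keeps the first occurrence of
    // each directory, which preserves lookup semantics.
    val dedupedPath = sys.env("PATH").split(':').distinct.mkString(":")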
+ export SPARK_VERSIONS_SUITE_IVY_PATH=/home/jenkins/sparkivy/per-executor-caches/0/.ivy2
+ SPARK_VERSIONS_SUITE_IVY_PATH=/home/jenkins/sparkivy/per-executor-caches/0/.ivy2
+ ./dev/run-tests-jenkins
Attempting to post to GitHub...
 > Post successful.
HEAD is now at 1c8e388... Merge 8e883180b310aa055beb40d11da30a973cb168f7 into 441fffa0bdb7e167797261455d345164829c716b
HEAD is now at 1c8e388... Merge 8e883180b310aa055beb40d11da30a973cb168f7 into 441fffa0bdb7e167797261455d345164829c716b
+++ dirname /home/jenkins/workspace/SparkPullRequestBuilder/R/install-dev.sh
++ cd /home/jenkins/workspace/SparkPullRequestBuilder/R
++ pwd
+ FWDIR=/home/jenkins/workspace/SparkPullRequestBuilder/R
+ LIB_DIR=/home/jenkins/workspace/SparkPullRequestBuilder/R/lib
+ mkdir -p /home/jenkins/workspace/SparkPullRequestBuilder/R/lib
+ pushd /home/jenkins/workspace/SparkPullRequestBuilder/R
+ . /home/jenkins/workspace/SparkPullRequestBuilder/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/SparkPullRequestBuilder/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/SparkPullRequestBuilder/R/create-rd.sh
+++ cd /home/jenkins/workspace/SparkPullRequestBuilder/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/SparkPullRequestBuilder/R
++ pushd /home/jenkins/workspace/SparkPullRequestBuilder/R
++ . /home/jenkins/workspace/SparkPullRequestBuilder/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("/home/jenkins/workspace/SparkPullRequestBuilder/R"); devtools::document(pkg="./pkg", roclets="rd") }'
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Loading SparkR
Creating new generic functions in package ‘SparkR’ for: ‘as.data.frame’, ‘colnames’, ‘colnames<-’, ‘cov’, ‘drop’, ‘na.omit’, ‘filter’, ‘intersect’, ‘sample’, ‘transform’, ‘subset’, ‘summary’, ‘union’, ‘endsWith’, ‘startsWith’, ‘lag’, ‘rank’, ‘sd’, ‘var’, ‘window’, ‘predict’, ‘rbind’
Creating generic functions from package ‘base’ in package ‘SparkR’ for: ‘substr’, ‘%in%’, ‘lapply’, ‘Filter’, ‘nrow’, ‘ncol’, ‘factorial’, ‘atan2’, ‘ifelse’
Writing Rd files: structType.Rd, print.structType.Rd, structField.Rd, print.structField.Rd, summarize.Rd, alias.Rd, arrange.Rd, as.data.frame.Rd, cache.Rd, checkpoint.Rd, coalesce.Rd, collect.Rd, columns.Rd, coltypes.Rd, count.Rd, cov.Rd, corr.Rd, createOrReplaceTempView.Rd, cube.Rd, dapply.Rd, dapplyCollect.Rd, gapply.Rd, gapplyCollect.Rd, describe.Rd, distinct.Rd, drop.Rd, dropDuplicates.Rd, nafunctions.Rd, dtypes.Rd, explain.Rd, except.Rd, exceptAll.Rd, filter.Rd, first.Rd, groupBy.Rd, hint.Rd, insertInto.Rd, intersect.Rd, intersectAll.Rd, isLocal.Rd, isStreaming.Rd, limit.Rd, localCheckpoint.Rd, merge.Rd, mutate.Rd, orderBy.Rd, persist.Rd, printSchema.Rd, registerTempTable-deprecated.Rd, rename.Rd, repartition.Rd, repartitionByRange.Rd, sample.Rd, rollup.Rd, sampleBy.Rd, saveAsTable.Rd, take.Rd, write.df.Rd, write.jdbc.Rd, write.json.Rd, write.orc.Rd, write.parquet.Rd, write.stream.Rd, write.text.Rd, schema.Rd, select.Rd, selectExpr.Rd, showDF.Rd, subset.Rd, summary.Rd, union.Rd, unionAll.Rd, unionByName.Rd, unpersist.Rd, with.Rd, withColumn.Rd, withWatermark.Rd, randomSplit.Rd, broadcast.Rd, columnfunctions.Rd, between.Rd, cast.Rd, endsWith.Rd, startsWith.Rd, column_nonaggregate_functions.Rd, otherwise.Rd, over.Rd, eq_null_safe.Rd, withField.Rd, dropFields.Rd, partitionBy.Rd, rowsBetween.Rd, rangeBetween.Rd, windowPartitionBy.Rd, windowOrderBy.Rd, column_datetime_diff_functions.Rd, column_aggregate_functions.Rd, column_collection_functions.Rd, column_ml_functions.Rd, column_string_functions.Rd, column_misc_functions.Rd, avg.Rd, column_math_functions.Rd, column.Rd, column_window_functions.Rd, column_datetime_functions.Rd, column_avro_functions.Rd, last.Rd, not.Rd, fitted.Rd, predict.Rd, rbind.Rd, spark.als.Rd, spark.bisectingKmeans.Rd, spark.fmClassifier.Rd, spark.fmRegressor.Rd, spark.gaussianMixture.Rd, spark.gbt.Rd, spark.glm.Rd, spark.isoreg.Rd, spark.kmeans.Rd, spark.kstest.Rd, spark.lda.Rd, spark.logit.Rd, spark.mlp.Rd, spark.naiveBayes.Rd, spark.decisionTree.Rd, spark.randomForest.Rd, spark.survreg.Rd, spark.svmLinear.Rd, spark.fpGrowth.Rd, spark.prefixSpan.Rd, spark.powerIterationClustering.Rd, spark.lm.Rd, write.ml.Rd, awaitTermination.Rd, isActive.Rd, lastProgress.Rd, queryName.Rd, status.Rd, stopQuery.Rd, print.jobj.Rd, show.Rd, substr.Rd, match.Rd, GroupedData.Rd, pivot.Rd, SparkDataFrame.Rd, storageLevel.Rd, toJSON.Rd, nrow.Rd, ncol.Rd, dim.Rd, head.Rd, join.Rd, crossJoin.Rd, attach.Rd, str.Rd, histogram.Rd, getNumPartitions.Rd, sparkR.conf.Rd, sparkR.version.Rd, createDataFrame.Rd, read.json.Rd, read.orc.Rd, read.parquet.Rd, read.text.Rd, sql.Rd, tableToDF.Rd, read.df.Rd, read.jdbc.Rd, read.stream.Rd, WindowSpec.Rd, createExternalTable-deprecated.Rd, createTable.Rd, cacheTable.Rd, uncacheTable.Rd, clearCache.Rd, dropTempTable-deprecated.Rd, dropTempView.Rd, tables.Rd, tableNames.Rd, currentDatabase.Rd, setCurrentDatabase.Rd, listDatabases.Rd, listTables.Rd, listColumns.Rd, listFunctions.Rd, recoverPartitions.Rd, refreshTable.Rd, refreshByPath.Rd, spark.addFile.Rd, spark.getSparkFilesRootDirectory.Rd, spark.getSparkFiles.Rd, spark.lapply.Rd, setLogLevel.Rd, setCheckpointDir.Rd, unresolved_named_lambda_var.Rd, create_lambda.Rd, invoke_higher_order_function.Rd, install.spark.Rd, sparkR.callJMethod.Rd, sparkR.callJStatic.Rd, sparkR.newJObject.Rd, LinearSVCModel-class.Rd, LogisticRegressionModel-class.Rd, MultilayerPerceptronClassificationModel-class.Rd, NaiveBayesModel-class.Rd, FMClassificationModel-class.Rd, BisectingKMeansModel-class.Rd, GaussianMixtureModel-class.Rd, KMeansModel-class.Rd, LDAModel-class.Rd, PowerIterationClustering-class.Rd, FPGrowthModel-class.Rd, PrefixSpan-class.Rd, ALSModel-class.Rd, AFTSurvivalRegressionModel-class.Rd, GeneralizedLinearRegressionModel-class.Rd, IsotonicRegressionModel-class.Rd, LinearRegressionModel-class.Rd, FMRegressionModel-class.Rd, glm.Rd, KSTest-class.Rd, GBTRegressionModel-class.Rd, GBTClassificationModel-class.Rd, RandomForestRegressionModel-class.Rd, RandomForestClassificationModel-class.Rd, DecisionTreeRegressionModel-class.Rd, DecisionTreeClassificationModel-class.Rd, read.ml.Rd, sparkR.session.stop.Rd, sparkR.init-deprecated.Rd, sparkRSQL.init-deprecated.Rd, sparkRHive.init-deprecated.Rd, sparkR.session.Rd, sparkR.uiWebUrl.Rd, setJobGroup.Rd, clearJobGroup.Rd, cancelJobGroup.Rd, setJobDescription.Rd, setLocalProperty.Rd, getLocalProperty.Rd, crosstab.Rd, freqItems.Rd, approxQuantile.Rd, StreamingQuery.Rd, hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/SparkPullRequestBuilder/R/lib /home/jenkins/workspace/SparkPullRequestBuilder/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating new generic functions in package ‘SparkR’ for: ‘as.data.frame’, ‘colnames’, ‘colnames<-’, ‘cov’, ‘drop’, ‘na.omit’, ‘filter’, ‘intersect’, ‘sample’, ‘transform’, ‘subset’, ‘summary’, ‘union’, ‘endsWith’, ‘startsWith’, ‘lag’, ‘rank’, ‘sd’, ‘var’, ‘window’, ‘predict’, ‘rbind’
Creating generic functions from package ‘base’ in package ‘SparkR’ for: ‘substr’, ‘%in%’, ‘lapply’, ‘Filter’, ‘nrow’, ‘ncol’, ‘factorial’, ‘atan2’, ‘ifelse’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/SparkPullRequestBuilder/R/lib
+ jar cfM /home/jenkins/workspace/SparkPullRequestBuilder/R/lib/sparkr.zip SparkR
+ popd
[info] Using build tool sbt with Hadoop profile hadoop3.2 and Hive profile hive2.3 under environment amplab_jenkins
From https://github.com/apache/spark
 * [new branch] master -> master
[info] Found the following changed modules: sql, catalyst
[info] Setup the following environment variables for tests:
========================================================================
Running Apache RAT checks
========================================================================
Attempting to fetch rat
RAT checks passed.
========================================================================
Running Scala style checks
========================================================================
[info] Checking Scala style using SBT with these profiles: -Phadoop-3.2 -Phive-2.3 -Phive-thriftserver -Pkubernetes -Pmesos -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pyarn -Phadoop-cloud
Scalastyle checks passed.
========================================================================
Building Spark
========================================================================
[info] Building Spark using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Phive-thriftserver -Pkubernetes -Pmesos -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pyarn -Phadoop-cloud test:package streaming-kinesis-asl-assembly/assembly
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
[info] welcome to sbt 1.4.6 (Private Build Java 1.8.0_222)
[info] loading settings for project sparkpullrequestbuilder-build from plugins.sbt ...
[info] loading project definition from /home/jenkins/workspace/SparkPullRequestBuilder/project
[info] resolving key references (36224 settings) ...
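An aside on the "Found the following changed modules: sql, catalyst" line above: the test harness maps the files the PR touches onto module roots so only affected modules are tested. A toy Scala illustration of that idea (the module table and function are hypothetical, not the actual dev/run-tests logic):

    // Hypothetical mapping from path prefixes to module names.
    val moduleRoots = Seq(
      "sql/catalyst/" -> "catalyst",
      "sql/core/"     -> "sql",
      "mllib/"        -> "mllib")

    def changedModules(changedFiles: Seq[String]): Set[String] =
      changedFiles.flatMap { path =>
        moduleRoots.collectFirst { case (root, module) if path.startsWith(root) => module }
      }.toSet

    // changedModules(Seq("sql/catalyst/src/main/scala/Foo.scala")) == Set("catalyst")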
[info] set current project to spark-parent (in build file:/home/jenkins/workspace/SparkPullRequestBuilder/)
[warn] there are 204 keys that are not used by any other settings/tasks:
[warn]
[warn] (condensed: the same six unused keys are reported for each of the 34 modules listed below; each key points at the /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala line that defines it)
[warn]   * <module> / Compile / checkstyle / javaSource   +- SparkBuild.scala:1001
[warn]   * <module> / M2r / publishMavenStyle             +- SparkBuild.scala:285
[warn]   * <module> / Sbt / publishMavenStyle             +- SparkBuild.scala:286
[warn]   * <module> / Test / checkstyle / javaSource      +- SparkBuild.scala:1002
[warn]   * <module> / scalaStyleOnCompile / logLevel      +- SparkBuild.scala:188
[warn]   * <module> / scalaStyleOnTest / logLevel         +- SparkBuild.scala:189
[warn] modules: assembly, avro, catalyst, core, examples, ganglia-lgpl, graphx, hadoop-cloud, hive, hive-thriftserver, kubernetes, kvstore, launcher, mesos, mllib, mllib-local, network-common, network-shuffle, network-yarn, repl, sketch, spark, sql, sql-kafka-0-10, streaming, streaming-kafka-0-10, streaming-kafka-0-10-assembly, streaming-kinesis-asl, streaming-kinesis-asl-assembly, tags, token-provider-kafka-0-10, tools, unsafe, yarn
[warn]
[warn] note: a setting might still be used by a command; to exclude a key from this `lintUnused` check
[warn] either append it to `Global / excludeLintKeys` or call .withRank(KeyRanks.Invisible) on the key
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] compiling 2 Scala sources and 8 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/tags/target/scala-2.12/classes ...
[info] compiling 1 Scala source to /home/jenkins/workspace/SparkPullRequestBuilder/tools/target/scala-2.12/classes ...
[info] compiling 79 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/network-common/target/scala-2.12/classes ...
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] done compiling
[info] done compiling
[info] compiling 9 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/target/scala-2.12/classes ...
[info] compiling 12 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/kvstore/target/scala-2.12/classes ...
[info] compiling 20 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/launcher/target/scala-2.12/classes ...
[info] compiling 18 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/unsafe/target/scala-2.12/classes ...
[info] compiling 6 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/tags/target/scala-2.12/test-classes ...
[info] compiling 5 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/mllib-local/target/scala-2.12/classes ...
[info] compiling 39 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/network-shuffle/target/scala-2.12/classes ...
[info] done compiling
[info] done compiling
[info] compiling 24 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/network-common/target/scala-2.12/test-classes ...
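Following the lintUnused note above, a hypothetical build-definition snippet that would silence such keys; exclusion is by key, so it applies across all module scopes. Whether Spark should suppress these rather than fix the definitions in SparkBuild.scala is a separate question:

    // Hypothetical sbt 1.4+ snippet: exempt keys that are set but only read
    // by commands from the `lintUnused` check flagged above.
    Global / excludeLintKeys += publishMavenStyle
    Global / excludeLintKeys += logLevel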
[warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:22:1: Unsafe is internal proprietary API and may be removed in a future release [warn] import sun.misc.Unsafe; [warn] ^ [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:28:1: Unsafe is internal proprietary API and may be removed in a future release [warn] private static final Unsafe _UNSAFE; [warn] ^ [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:150:1: Unsafe is internal proprietary API and may be removed in a future release [warn] sun.misc.Unsafe unsafe; [warn] ^ [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:152:1: Unsafe is internal proprietary API and may be removed in a future release [warn] Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe"); [warn] ^ [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/src/main/java/org/apache/spark/util/sketch/Platform.java:154:1: Unsafe is internal proprietary API and may be removed in a future release [warn] unsafe = (sun.misc.Unsafe) unsafeField.get(null); [warn] ^ [warn] 5 warnings [info] done compiling [info] compiling 3 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/sketch/target/scala-2.12/test-classes ... [info] done compiling [info] done compiling [info] done compiling [info] compiling 7 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/launcher/target/scala-2.12/test-classes ... [info] compiling 1 Scala source and 5 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/unsafe/target/scala-2.12/test-classes ... [info] compiling 11 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/kvstore/target/scala-2.12/test-classes ... [info] done compiling [info] compiling 3 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/network-yarn/target/scala-2.12/classes ... [info] compiling 561 Scala sources and 99 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/core/target/scala-2.12/classes ... [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:105:1: [unchecked] unchecked conversion [warn] Iterator<ManagedBuffer> buffers = Mockito.mock(Iterator.class); [warn] ^ required: Iterator<ManagedBuffer> [warn] found: Iterator [warn] /home/jenkins/workspace/SparkPullRequestBuilder/common/network-common/src/test/java/org/apache/spark/network/server/OneForOneStreamManagerSuite.java:111:1: [unchecked] unchecked conversion [warn] Iterator<ManagedBuffer> buffers2 = Mockito.mock(Iterator.class); [warn] ^ required: Iterator<ManagedBuffer> [warn] found: Iterator [warn] Note: Some input files use or override a deprecated API. [warn] Note: Recompile with -Xlint:deprecation for details. [warn] 2 warnings [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/launcher/src/test/java/org/apache/spark/launcher/SparkSubmitCommandBuilderSuite.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [info] done compiling [info] done compiling [info] compiling 16 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/common/network-shuffle/target/scala-2.12/test-classes ...
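The five Platform.java notices above are javac flagging direct use of the JDK-internal Unsafe class; the flagged lines are the standard reflective grab of its singleton. A minimal, self-contained sketch of that pattern, written in Scala for brevity (it mirrors the shape of the flagged code, not Spark's exact source):

import java.lang.reflect.Field

// sun.misc.Unsafe cannot be constructed; the JVM keeps one instance in a
// private static field, so callers pry it out reflectively.
val unsafe: sun.misc.Unsafe = {
  val f: Field = classOf[sun.misc.Unsafe].getDeclaredField("theUnsafe")
  f.setAccessible(true)                       // the field is private
  f.get(null).asInstanceOf[sun.misc.Unsafe]   // static field => null receiver
}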
[info] done compiling [info] done compiling [info] done compiling [info] done compiling [info] done compiling [info] compiling 10 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/mllib-local/target/scala-2.12/test-classes ... [info] done compiling [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings. [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/core/src/main/java/org/apache/spark/SparkFirehoseListener.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 1 Scala source and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/spark-ganglia-lgpl/target/scala-2.12/classes ... [info] compiling 41 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/kubernetes/core/target/scala-2.12/classes ... [info] compiling 104 Scala sources and 6 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/streaming/target/scala-2.12/classes ... [info] compiling 20 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/mesos/target/scala-2.12/classes ... [info] compiling 38 Scala sources and 5 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/graphx/target/scala-2.12/classes ... [info] compiling 329 Scala sources and 112 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/catalyst/target/scala-2.12/classes ... [info] compiling 25 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/yarn/target/scala-2.12/classes ... [info] compiling 5 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10-token-provider/target/scala-2.12/classes ... [info] compiling 303 Scala sources and 27 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/core/target/scala-2.12/test-classes ... [info] done compiling [info] done compiling [info] done compiling [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] done compiling [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 11 Scala sources and 2 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/external/kinesis-asl/target/scala-2.12/classes ... 
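The repeated eviction notice above is sbt's stock hint, not an error; `evicted` prints which transitive versions lost out. A sketch of the usual follow-up once a clash actually matters (the module coordinates and version here are hypothetical):

// sbt> evicted   -- lists the evicted dependency versions and why
// Then pin one version so the eviction is deliberate rather than accidental:
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.10.0"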
[info] compiling 10 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10/target/scala-2.12/classes ... [info] done compiling [warn] /home/jenkins/workspace/SparkPullRequestBuilder/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java:157:1: [unchecked] unchecked method invocation: method union in class JavaStreamingContext is applied to given types [warn] unionStreams = jssc.union(streamsList.toArray(new JavaDStream[0])); [warn] ^ required: JavaDStream<T>[] [warn] found: JavaDStream[] [warn] where T is a type-variable: [warn] T extends Object declared in method <T>union(JavaDStream<T>...) [warn] /home/jenkins/workspace/SparkPullRequestBuilder/external/kinesis-asl/src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java:157:1: [unchecked] unchecked conversion [warn] unionStreams = jssc.union(streamsList.toArray(new JavaDStream[0])); [warn] ^ required: JavaDStream<T>[] [warn] found: JavaDStream[] [warn] where T is a type-variable: [warn] T extends Object declared in method <T>union(JavaDStream<T>...) [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsAtomicPartitionManagement.java:52:1: [unchecked] unchecked method invocation: method createPartitions in interface SupportsAtomicPartitionManagement is applied to given types [warn] createPartitions(new InternalRow[]{ident}, new Map[]{properties}); [warn] ^ required: InternalRow[],Map<String,String>[] [warn] found: InternalRow[],Map[] [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsAtomicPartitionManagement.java:52:1: [unchecked] unchecked conversion [warn] createPartitions(new InternalRow[]{ident}, new Map[]{properties}); [warn] ^ required: Map<String,String>[] [warn] found: Map[] [warn] 2 warnings [info] Note: Some input files use or override a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 6 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10-token-provider/target/scala-2.12/test-classes ... [info] compiling 19 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/graphx/target/scala-2.12/test-classes ... [info] compiling 41 Scala sources and 9 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/streaming/target/scala-2.12/test-classes ... [info] compiling 35 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/kubernetes/core/target/scala-2.12/test-classes ... [info] compiling 11 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/mesos/target/scala-2.12/test-classes ... [info] compiling 21 Scala sources and 3 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/resource-managers/yarn/target/scala-2.12/test-classes ... [info] done compiling [info] compiling 278 Scala sources and 6 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/catalyst/target/scala-2.12/test-classes ... [info] compiling 495 Scala sources and 59 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/core/target/scala-2.12/classes ...
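The two [unchecked] pairs above are the classic generic-array gap: `new JavaDStream[0]` and `new Map[]{...}` are raw arrays, so javac cannot carry the element type through `toArray` or a varargs call. Scala threads a ClassTag through instead of a raw array; a small illustrative sketch (the names here are hypothetical):

import scala.reflect.ClassTag

// ClassTag supplies the runtime class needed to allocate a properly
// typed array, so no raw array and no unchecked conversion appears.
def toTypedArray[A: ClassTag](xs: List[A]): Array[A] = xs.toArray

val streams: Array[String] = toTypedArray(List("stream-1", "stream-2"))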
[info] done compiling [info] done compiling [info] done compiling [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] done compiling [info] compiling 6 Scala sources and 4 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10/target/scala-2.12/test-classes ... [info] compiling 8 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/kinesis-asl/target/scala-2.12/test-classes ... [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/external/kinesis-asl/src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisInputDStreamBuilderSuite.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [info] done compiling [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [info] done compiling [info] compiling 29 Scala sources and 2 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/target/scala-2.12/classes ... [info] compiling 30 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10-sql/target/scala-2.12/classes ... [info] compiling 4 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/repl/target/scala-2.12/classes ... [info] compiling 2 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/hadoop-cloud/target/scala-2.12/classes ... [info] compiling 324 Scala sources and 5 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/mllib/target/scala-2.12/classes ... [info] compiling 18 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/avro/target/scala-2.12/classes ... [info] done compiling [info] compiling 2 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/hadoop-cloud/target/scala-2.12/test-classes ... [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 441 Scala sources and 40 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/core/target/scala-2.12/test-classes ... [info] done compiling [info] done compiling [warn] /home/jenkins/workspace/SparkPullRequestBuilder/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:55:1: [unchecked] unchecked call to SparkAvroKeyRecordWriter(Schema,GenericData,CodecFactory,OutputStream,int,Map<String,String>) as a member of the raw type SparkAvroKeyRecordWriter [warn] return new SparkAvroKeyRecordWriter( [warn] ^ [warn] /home/jenkins/workspace/SparkPullRequestBuilder/external/avro/src/main/java/org/apache/spark/sql/avro/SparkAvroKeyOutputFormat.java:74:1: [unchecked] unchecked call to DataFileWriter(DatumWriter<D>) as a member of the raw type DataFileWriter [warn] this.mAvroFileWriter = new DataFileWriter(dataModel.createDatumWriter(writerSchema)); [warn] ^ where D is a type-variable: [warn] D extends Object declared in class DataFileWriter [info] done compiling [info] done compiling [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/orc/SparkOrcNewRecordReader.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details.
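`multiple main classes detected` recurs because modules such as examples and tools ship many runnable classes, so sbt cannot pick one for `run` or for the jar manifest. A sketch of the two usual responses (the class name is only an example):

// sbt> show discoveredMainClasses   -- list every detected main class
// Pin one explicitly so run/packaging stop warning:
Compile / mainClass := Some("org.apache.spark.examples.SparkPi")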
[info] done compiling [info] compiling 26 Scala sources and 86 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive-thriftserver/target/scala-2.12/classes ... [info] Note: Some input files use or override a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 197 Scala sources and 134 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/examples/target/scala-2.12/classes ... [info] compiling 5 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/repl/target/scala-2.12/test-classes ... [info] done compiling [info] Note: /home/jenkins/workspace/SparkPullRequestBuilder/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java uses or overrides a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] Note: Some input files use or override a deprecated API. [info] Note: Recompile with -Xlint:deprecation for details. [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 8 Scala sources and 1 Java source to /home/jenkins/workspace/SparkPullRequestBuilder/external/avro/target/scala-2.12/test-classes ... [info] compiling 21 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/external/kafka-0-10-sql/target/scala-2.12/test-classes ... [info] compiling 110 Scala sources and 17 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/target/scala-2.12/test-classes ... [info] compiling 204 Scala sources and 66 Java sources to /home/jenkins/workspace/SparkPullRequestBuilder/mllib/target/scala-2.12/test-classes ... 
[info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] done compiling [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:464:1: [unchecked] unchecked cast [warn] setLint((List<Integer>)value); [warn] ^ required: List<Integer> [warn] found: Object [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:472:1: [unchecked] unchecked cast [warn] setLString((List<String>)value); [warn] ^ required: List<String> [warn] found: Object [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:480:1: [unchecked] unchecked cast [warn] setLintString((List<IntString>)value); [warn] ^ required: List<IntString> [warn] found: Object [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:488:1: [unchecked] unchecked cast [warn] setMStringString((Map<String,String>)value); [warn] ^ required: Map<String,String> [warn] found: Object [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:749:1: [unchecked] unchecked call to read(TProtocol,T) as a member of the raw type IScheme [warn] schemes.get(iprot.getScheme()).getScheme().read(iprot, this); [warn] ^ where T is a type-variable: [warn] T extends TBase declared in interface IScheme [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:753:1: [unchecked] unchecked call to write(TProtocol,T) as a member of the raw type IScheme [warn] schemes.get(oprot.getScheme()).getScheme().write(oprot, this); [warn] ^ where T is a type-variable: [warn] T extends TBase declared in interface IScheme [warn] /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/test/Complex.java:1027:1: [unchecked] getScheme() in ComplexTupleSchemeFactory implements getScheme() in SchemeFactory [warn] public ComplexTupleScheme getScheme() { [warn] ^ return type requires unchecked conversion from ComplexTupleScheme to S [warn] where S is a type-variable: [warn] S extends IScheme declared in method <S>getScheme() [warn] Note: /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive/src/test/java/org/apache/spark/sql/hive/JavaDataFrameSuite.java uses or overrides a deprecated API. [warn] Note: Recompile with -Xlint:deprecation for details. [warn] 8 warnings [info] done compiling [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] compiling 18 Scala sources to /home/jenkins/workspace/SparkPullRequestBuilder/sql/hive-thriftserver/target/scala-2.12/test-classes ...
[info] done compiling [info] done compiling [success] Total time: 314 s (05:14), completed Jan 17, 2021 9:09:41 AM [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list [info] Strategy 'discard' was applied to 2 files (Run the task at debug level to see details) [info] Strategy 'filterDistinctLines' was applied to 5 files (Run the task at debug level to see details) [info] Strategy 'first' was applied to 1632 files (Run the task at debug level to see details) [success] Total time: 21 s, completed Jan 17, 2021 9:10:03 AM ======================================================================== Detecting binary incompatibilities with MiMa ======================================================================== [info] Detecting binary incompatibilities with MiMa using SBT with these profiles: -Phadoop-3.2 -Phive-2.3 -Phive-thriftserver -Pkubernetes -Pmesos -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pyarn -Phadoop-cloud [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Strategy [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedQueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.MainClassOptionParser [WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.dynalloc.ExecutorMonitor.ShuffleCleanedEvent [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.FMRegressionModel.FMRegressionModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver Error instrumenting class:org.apache.spark.mapred.SparkHadoopMapRedUtil$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.V1FallbackWriters.toV1WriteBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromStatementBodyContext [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Hello [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoHelperChannel [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveSessionCatalog.SessionCatalogAndTable [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentListContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LegacyDecimalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.util.SignalUtils.ActionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleMultipartIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.api.r.BaseRRunner.ReaderIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentityTransformContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameListContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator22$3 Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.parquet.ParquetWriteBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuerySpecificationContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.DeleteColumn [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DateAccessor Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamExecution$ Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$FileTypes$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ImplicitTypeCasts [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.SchemaPruning.RootField [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.BucketSpecHelper [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.ZooKeeperLeaderElectionAgent.LeadershipStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnsetTablePropertiesContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalValueContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator1$1 [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Decoder [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsHelper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueRowConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.Listener [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ArrayConstructor [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.TempFileBasedBlockStoreUpdater Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestWorkerState [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.Accessor [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeSorterSpillMerger.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColumnReferenceContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermDefaultContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonContext Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.orc.OrcWriteBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds [WARN] Unable to detect inner functions for 
class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerExecutorStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryTermContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator19$3 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.StandaloneResourceUtils.MutableResourceInfo [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data Error instrumenting class:org.apache.spark.input.StreamInputFormat [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.CaseWhenCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.BlockFetchStarter [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$5 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator2$2 Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesType Error instrumenting class:org.apache.spark.deploy.history.RollingEventLogFilesWriter$ [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.io.LocalDiskShuffleMapOutputWriter.$PartitionWriterStream [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.TaskIdentifier [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractWindowExpressions [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator1$1 [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.PrefixCache [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedRelationContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$4 Error instrumenting class:org.apache.spark.sql.execution.PartitionedFileUtil$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.CubeType [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.FloatIsConflicted [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGroupingAnalytics [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineReader [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableObjectArray [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.Division [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.DoubleHasher [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveDelegationTokens [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiUnitsIntervalContext [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.TrackerState [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.PlanChangeLogger [WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySink.AddedData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ParenthesizedExpressionContext Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.dynalloc.ExecutorMonitor.Tracker [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData [WARN] Unable to 
detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableProviderContext Error instrumenting class:org.apache.spark.sql.execution.command.DDLUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBytesContext [WARN] Unable to detect inner functions for class:org.apache.spark.api.r.BaseRRunner.WriterThread [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MergeIntoTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.expressions [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.EventFilter.FilterStatistics [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$4 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerExecutorStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFunctionContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$7 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperReader Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LocationSpecContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Tokenizer.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.$$typecreator1$2 Error instrumenting 
class:org.apache.spark.sql.execution.command.LoadDataCommand$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveCatalogs.NonSessionCatalogAndTable [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.LBFGS.CostFun [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.executor.CoarseGrainedExecutorBackend.Arguments [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.NGram.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyKeyContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator21$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.COMMITTED [WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.ClientPool [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.$$typecreator1$1 Error instrumenting class:org.apache.spark.scheduler.SplitInfo$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowNamespacesContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExtractContext [WARN] Unable to detect inner functions for class:org.apache.spark.SparkBuildInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTablePartitionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SmallIntLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseMatrixPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$11 Error instrumenting class:org.apache.spark.api.python.DoubleArrayWritable [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0 Error instrumenting class:org.apache.spark.ml.tuning.TrainValidationSplitModel$TrainValidationSplitModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.TransformHelper [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingGlobalLimitStrategy Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.api.java.JavaUtils.SerializableMapWrapper [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToNumValuesStore [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeRegressorWrapper.DecisionTreeRegressorWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.AnalysisErrorAt [WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Once [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RepairTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator17$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.FloatConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigest [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Std [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowConstructorContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery.PartitionedRelation [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AggregationClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WindowFunctionType.Python [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.AssociationRules.Rule [WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedGroupConverter [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummaryAggregator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Sequence.SequenceImpl Error instrumenting class:org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowDefContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ErrorIdentContext [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillReader [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNewInstance [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.IntervalUtils.ParseState [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ErrorCapturingIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DateConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Aggregator [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.IteratorForPartition [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PivotColumnContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.StageAttempt [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.IntervalUtils.IntervalUnit [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetQuantifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator5$2 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CtesContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.plans.DslLogicalPlan [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.FMClassificationModel.FMClassificationModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.ReceiverTracker.ReceiverTrackerEndpoint [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.IdentityProjection [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.PowerIterationClustering.$$typecreator5$1 Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedCommitProtocol [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.$SortedIterator [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.DaysWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.fpm.FPGrowthModel.FPGrowthModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WidenSetOperationTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AssignmentListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptionHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.StandaloneResourceUtils.StandaloneResourceAllocation
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.RobustScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LogisticRegressionWrapper.LogisticRegressionWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.DateTimeOperations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator13$2
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.output
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.DecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComparisonOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.$ClientCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStatusResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.IntArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator11$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayes.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCurrentNamespaceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator4$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigIntLiteralContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.FilePartition$
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PivotClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnPosition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Count
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WindowFunctionType.SQL
Error instrumenting class:org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameHelperMethods
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnNullability
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.FloatConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.BinaryPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.FMClassificationModel.FMClassificationModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PivotValueContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.InputFileBlockHolder.FileBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FailNativeCommandContext
Error instrumenting class:org.apache.spark.api.python.WriteInputFormatTestDataGenerator$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleTableSchemaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.WriteSkippedQueue
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.EncryptedDownloadFile
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Cholesky
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.LDAWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$16
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateOperatorContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis
Error instrumenting class:org.apache.spark.deploy.master.ui.MasterWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableAttemptInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.RollupType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator17$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator25$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableValuedFunctionContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.json.JsonTable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator1$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.NaturalKeys
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.EventFilter.FilterStatistics
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetNamespaceLocationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Min
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$2
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.text.TextScan
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStoreProvider$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalBlockHandler.$ShuffleManagedBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.WindowInPandasExec.WindowBoundType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DatasetUtils.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator16$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator2$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.ColumnChange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingDeduplicationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Window
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ReplaceTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableLongArray
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.EncryptedDownloadFile.$EncryptedDownloadWritableChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CommentTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumericLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveBinaryArithmetic
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator12$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionValContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.orc.OrcScan$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhereClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator3$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.TextBasedFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator2$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary.$$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$6
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.TransportFrameDecoder.Interceptor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.FixedLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.LabeledPointPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.RemoveProperty
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.LevelDBLogger
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OverlayContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerSlave
Error instrumenting class:org.apache.spark.ml.tuning.CrossValidatorModel$CrossValidatorModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter.ElementConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleInsertQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.SuccessFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClientFactory.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.errors.TreeNodeException
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator5$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Index
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreType
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkAppHandle.State
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator8$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.MapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TypeConstructorContext
Error instrumenting class:org.apache.spark.SSLOptions
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Append
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.ArrayDataUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.orc.OrcTable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.ALSModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$11
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableAliasContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator6$3
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.SchemaPruning.RootField
Error instrumenting class:org.apache.spark.kafka010.KafkaDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.TextSocketContinuousStream.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ImplicitOperators
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.AsTableIdentifier
Error instrumenting class:org.apache.spark.input.WholeTextFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveRdd
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator18$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveMissingReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRandomSeed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnquotedIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.PrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.io.LocalDiskShuffleMapOutputWriter.$LocalDiskShufflePartitionWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNaturalAndUsingJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver
Error instrumenting class:org.apache.spark.deploy.history.HistoryServer
Error instrumenting class:org.apache.spark.sql.execution.streaming.ManifestFileCommitProtocol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator14$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate
Error instrumenting class:org.apache.spark.api.python.TestOutputKeyConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedColTypeWithPositionListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.optimization.NNLS.Workspace
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator7$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.linalg.distributed.RowMatrix.$SVDMode$1$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UnitToUnitIntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.binaryfile.BinaryFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IndexToString.$$typecreator1$4
Error instrumenting class:org.apache.spark.api.python.TestWritable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.RawStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SendHeartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerRemoved
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMax
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaDataConsumer.NonCachedKafkaDataConsumer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.LeftSide
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.Optimizer.OptimizeSubqueries
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.StatFunctions.CovarianceCounter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator19$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RecoverPartitionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToValuePair
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PolynomialExpansion.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$6
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.Hasher
Error instrumenting class:org.apache.spark.input.StreamBasedRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator7$3
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskReaper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.STATE
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LocalIndexEncoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.SharedReadWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TransformListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Replaced
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.StringType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.Deserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedColTypeWithPositionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.CreateStageResult
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
Error instrumenting class:org.apache.spark.WritableConverter$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnsiNonReservedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterInStandby
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.DatabaseDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.After
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.ChainedIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.IdentifierHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator23$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.SparseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DereferenceContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.MultiLineCSVDataSource$
Error instrumenting class:org.apache.spark.deploy.security.HBaseDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntegerLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.LookupFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BooleanAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Complete
Error instrumenting class:org.apache.spark.input.StreamFileInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AnalyzeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList.CountingRemoveIfForEach
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.ChiSquareResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.TimestampConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowFunctionsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DoubleAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StructContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.MultipartIdentifierHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.MapOutputTrackerMaster.MessageLoop
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TransformQuerySpecificationContext
Error instrumenting class:org.apache.spark.metrics.sink.PrometheusServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaDataConsumer.NonCachedKafkaDataConsumer
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicatedContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.RpcHandler.OneWayRpcCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator8$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamespaceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteDirContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.$$typecreator1$2
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryInputFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$5
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase$NullIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SortItemContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubquery
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$RpcChannelListener
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.ReceiverInputDStream.ReceiverRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.TypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Key
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.NamespaceChange.1
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.util.BytecodeUtils.MethodInvocationFinder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExponentLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StringLiteralCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.BooleanEquality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ByteConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslServer.$DigestCallbackHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.IntHasher
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.ABORTED
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ByteType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$10
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitioningUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.trees.TreeNodeRef
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$2
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.csv.CSVTable
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.ByteArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseMatrixPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeQueryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.CheckpointFileManager$
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.BlockStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest
[WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UncacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MatchedActionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslSymbol
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Max
Error instrumenting class:org.apache.spark.sql.internal.SharedState$
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ElementwiseProduct.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.StateStoreAwareZipPartitionsRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingJoinStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ShuffleMetricsSource
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexDataTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.InMemoryScans
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$9
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.text.TextTable
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QuotedIdentifierAlternativeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.FunctionArgumentConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator9$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetFilters.ParquetPrimitiveField
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaDataConsumer.CachedKafkaDataConsumer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableFileStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StructAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.Aggregation
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.KolmogorovSmirnovTest.KolmogorovSmirnovTestResult
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus
Error instrumenting class:org.apache.spark.ml.source.libsvm.LibSVMFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.AttributeSeq
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator11$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.HDFSBackedStateStore.UPDATING
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.CrossValidator.CrossValidatorReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.DataSourceV2Implicits.TableHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator16$3
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0.$typecreator5$1
Error instrumenting class:org.apache.spark.deploy.history.EventLogFileReader$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestExecutors
Error instrumenting class:org.apache.spark.sql.execution.datasources.DataSourceUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnType
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.RemovedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorDesc
[WARN] Unable to detect inner functions for class:org.apache.spark.ui.JettyUtils.ServletParams
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.ContinuousRow
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.io.LocalDiskShuffleMapOutputWriter.$PartitionWriterChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggAliasInGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.PushLeftSemiLeftAntiThroughJoin.AllowedJoin
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.io.LocalDiskShuffleMapOutputWriter.1
Error instrumenting class:org.apache.spark.deploy.rest.RestSubmissionServer
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDeBase.BasePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.Cluster
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.SpecialLimits
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator13$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2n
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.PartitionOverwriteMode
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Sequence.TemporalSequenceImpl
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec.InputProcessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$17
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestClassifierWrapper.RandomForestClassifierWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DatasetUtils.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowVectorAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.GBTRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ErrorCapturingUnitToUnitIntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedWindowContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.MapAccessor
Error instrumenting class:org.apache.spark.sql.execution.command.CommandUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CacheTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.streaming.InternalOutputModes.Update
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.SpillableArrayIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueRowConverterFormatV2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.WindowInPandasExec.WindowBoundType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetMapConverter.$KeyValueConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.StringArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.TypedAverage.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StorageHandlerContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ReplaceTableHeaderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.ContinuousRow
Error instrumenting class:org.apache.spark.input.Configurable
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DataType.JSortedObject
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsFractional
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Expression
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection.Schema
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$ChunkCallback
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropFunctionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FrameBoundContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.FMRegressionModel.FMRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetFilters.ParquetSchemaType
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.RobustScalerModelWriter.$$typecreator1$2
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$FileTypes
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.CosineSilhouette.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.DCT.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tuning.TrainValidationSplit.TrainValidationSplitWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.StageAttempt
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.UnsignedPrefixComparatorNullsLast
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchBlockInfo
Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveFunctions
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator18$1
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.SortComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator6$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Sequence.IntegralSequenceImpl
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.SerializerBuildHelper.MapElementInformation
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.IsExecutorAlive
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.FMClassificationModel.FMClassificationModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StatefulAggregationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperReader
Error instrumenting class:org.apache.spark.ui.ProxyRedirectHandler$ResponseWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.ExecutorMetricsSource.ExecutorMetricGauge
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.json.JsonScan
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.RatingPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetOperationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.MessageDecoder.1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.LocalLDAModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.DefaultPartitionCoalescer.partitionGroupOrdering
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.EpochMarker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStore.MaintenanceTask
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerLatestState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeColNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinTypeContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildSide
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorAdded
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.ALSWrapper.ALSWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultipartIdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticBinaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$12
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableFileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator8$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildRight
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin.ColumnReference
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.InConversion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExplainContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.FlattenStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.CheckpointFileManager$CancellableFSDataOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$13
Error instrumenting class:org.apache.spark.ui.JettyUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.$KVSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpillableIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.implicits
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOrdinalInOrderByAndGroupBy
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockDownloadFileManager.ReferenceWithCleanup
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
Error instrumenting class:org.apache.spark.input.FixedLengthBinaryRecordReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator15$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.Pipeline.PipelineWriter
Error instrumenting class:org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryPrimaryContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.StopAppClient
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.ErrorHandlingWritableChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.LongAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator1$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyPartition
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.NaiveBayesWrapper.NaiveBayesWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.QueryExecution.debug
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GroupingSetContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Normalizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Nominal$1$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ShortAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.TaskIdentifier
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.RobustScalerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor
Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionPath$
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.csv.CSVWriteBuilder
Error instrumenting class:org.apache.spark.sql.execution.datasources.SchemaMergeUtils$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Sequence.DefaultStep
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateManagerImplV2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResourceContext
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.$Index
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.TextSocketContinuousStream.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.Data$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.FMRegressionModel.FMRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.CatalogDatabaseHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.NGram.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HavingClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveTempViews
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslExpression
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator2$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator10$1
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ArrayAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.CreateStageResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Subscript
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveReferences
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.CoalesceExec.EmptyRDDWithPartitions
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorDescNullsFirst
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.Data
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile$
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PredicateContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data
Error instrumenting class:org.apache.spark.sql.catalyst.parser.ParserUtils$EnhancedLogicalPlan$
[WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.ShuffleInMemorySorter.ShuffleSorterIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.HashComparator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestSubmitDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.JoinReorderDP.JoinPlan
Error instrumenting class:org.apache.spark.deploy.worker.ui.WorkerWebUI
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.GeneratorState
[WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleWrite
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseMatrixPickler
Error instrumenting class:org.apache.spark.metrics.MetricsSystem
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.DenseVectorPickler
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.ExecutorMetricsPoller.TCMP
Error instrumenting class:org.apache.spark.status.api.v1.PrometheusResource$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNamespace
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateManagerImplBase
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.RenameColumn
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DefaultParamsReader.Metadata
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.GeneralizedLinearRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator17$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.xml.UDFXPathUtil.ReusableStringReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StringLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.RebaseDateTime.JsonRebaseRecord
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.FMRegressionModel.FMRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.ImplicitAttribute
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.util.Utils.Lock
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.BisectingKMeansModel.BisectingKMeansModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanValueContext
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.parquet.ParquetTable
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.VertexData
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.TASK_END_REASON_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin.ColumnReference
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator5$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.FlatMapGroupsWithStateStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.EltCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Type
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertOverwriteTableContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixture.$$typecreator4$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndIdentifier
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CompleteRecovery
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.ShippableVertexPartition.ShippableVertexPartitionOpsConstructor
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.FPGrowthWrapper.FPGrowthWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.LimitMarker
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator11$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByPercentileContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.json.JsonFileFormat
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveSubqueryColumnAliases
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$4
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BinaryType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowColumnsContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.StoreVersion
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockSort
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalBlockHandler.$ShuffleMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDA.LDAReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.Expr
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.dynalloc.ExecutorMonitor.ShuffleCleanedEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Log
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.RobustScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationFinished
Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSinkLog
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalNotContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.adaptive.OptimizeLocalShuffleReader.BroadcastJoinWithShuffleRight
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator20$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PolynomialExpansion.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Binary$1$
[WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.ClassInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.RandomVertexCut
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StandardScalerModel.StandardScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.StreamingListenerBus.WrappedStreamingListenerEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.WriteQueued
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.IntWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.OneForOneStreamManager.StreamState
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.GroupByType
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.types.UTF8String.LongWrapper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FileFormatContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingRelationStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.ParserUtils.EnhancedLogicalPlan
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerFailed
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.MapZipWithCoercion
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.PushLeftSemiLeftAntiThroughJoin.PushdownDirection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierListContext
Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex.SerializableBlockLocation
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetFilters.ParquetPrimitiveField
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueStore
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColTypeListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Unresolved$1$
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableIntArray
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateKeyWatermarkPredicate
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetNamespacePropertiesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SortPrefixUtils.NoOpPrefixComparator
Error instrumenting class:org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CommentSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.CheckForWorkerTimeOut
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$MethodAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableColumnContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.Serializer
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetLongDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.OutputCommitCoordinatorEndpoint
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.EnsembleNodeData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DoubleConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ByteBufferBlockStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.LeastSquaresNESolver
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionNameContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.MetricsAggregate
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.CosineSilhouette.$typecreator2$2
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationListener
Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
Error instrumenting class:org.apache.spark.sql.execution.datasources.binaryfile.BinaryFileFormat$
Error instrumenting class:org.apache.spark.metrics.sink.MetricsServlet
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyListContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$5
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.DataTypeJsonUtils.DataTypeJsonDeserializer
[WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.SPARK_LISTENER_EVENT_FORMATTED_CLASS_NAMES
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutputStream
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Message
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalBlockStoreClient.$2
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestDriverStatus
Error instrumenting class:org.apache.spark.ui.DelegatingServletContextHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NumNonZeros
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.NullIntolerant
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.RelationalGroupedDataset.PivotType
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data
Error instrumenting class:org.apache.spark.ml.tuning.CrossValidatorModel$CrossValidatorModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.$ManualCloseBufferedOutputStream$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.RebaseDateTime.RebaseInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.launcher.Main.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.IntAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.IntegerType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.WriteQueueResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.BooleanBitSet.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.DCT.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.QueryPlanningTracker.PhaseSummary
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.QuasiNewton
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.PrefixComputer.Prefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.StructNullableTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.KMeansWrapper.KMeansWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.StringAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$8
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.$$typecreator2$1
Error instrumenting class:org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$
Error instrumenting class:org.apache.spark.sql.execution.streaming.state.StateStore$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RelationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.$$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.sources.MemorySink.AddedData
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.StandaloneResourceUtils.StandaloneResourceAllocation
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalBlockHandler.$ManagedBufferIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleMethodContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.SubmitDriverResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.BinaryAccessor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PartitionSpecLocationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator9$1
Error instrumenting class:org.apache.spark.sql.execution.streaming.StreamMetadata$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.IntDelta.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator4$2
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierCommentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.RightSide
Error instrumenting class:org.apache.spark.ui.ServerInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StopWordsRemover.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.FileBasedWriteAheadLog.LogInfo
Error instrumenting class:org.apache.spark.kafka010.KafkaTokenUtil$KafkaDelegationTokenIdentifier
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubstringContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.CosineSilhouette.$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.Strings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.WindowsSubstitution
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.DiskMapIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.GradientBoostedTreesModel.SaveLoadV1_0
Error instrumenting class:org.apache.spark.sql.execution.datasources.NoopCache$
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.ChiSquareTest.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost
[WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.protocol.BlockTransferMessage.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.CholeskySolver
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator6$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.GeneralizedLinearRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CurrentDatetimeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.JoinSelection
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryLists
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$6
[WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator14$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LambdaContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BucketSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SourceFileRemover
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TransformClauseContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator4$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$9
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.FloatAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DoubleLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.Block.InlineHelper
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.SetProperty
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveNamespace
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.SerializationDebugger
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.BlockGenerator.Block
[WARN] Unable to detect inner functions for class:org.apache.spark.BarrierCoordinator.ContextBarrierState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionCallContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.CalendarConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSlicer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeM2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$14
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutors
Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.text.TextWriteBuilder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubqueryExpressionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassReflection
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugQuery
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ImputerModel.ImputerReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableClausesContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.RobustScalerModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
[WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.EncryptedMessage
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.LongDelta.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.optimizer.StarSchemaDetection.TableAccessCardinality
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator4$2
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.PromoteStrings
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.SessionCatalogAndIdentifier
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaRateController
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.$$typecreator3$1
[WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$Location
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.IntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ApplyTransformContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.BisectingKMeansWrapper.BisectingKMeansWrapperReader
[WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Encoders.LongArrays
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RowUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantDefaultContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Decoder
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.ArrayWrappers.ComparableByteArray
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.InBlock
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DatasetUtils.$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.ByteBufferBlockStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$5
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LateralViewContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndMultipartIdentifier
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ComplexColTypeContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryView
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.RepeatedPrimitiveConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator12$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ErrorCapturingMultiUnitsIntervalContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.ShortType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveCoalesceHints
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator2$1
Error instrumenting class:org.apache.spark.mllib.regression.IsotonicRegressionModel$SaveLoadV1_0$Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.$$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.Method
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Tokenizer.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator23$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.StreamingGlobalLimitStrategy
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ClearCacheContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator21$3
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.StructTypePickler
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelReader
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.StringConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexerModel.VectorIndexerModelWriter.$$typecreator1$2
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.DirectKafkaInputDStreamCheckpointData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin.LogicalPlanWithDatasetId
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.KMeansModelReader.$$typecreator5$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TransformArgumentContext
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.internal.plugin.PluginContextImpl.PluginMetricsSource
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ArrayConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator5$4
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.DeprecatedConfig
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.plans.logical.statsEstimation.EstimationUtils.OverlappedRange
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LastContext
[WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SparkSaslClient.1
Error instrumenting class:org.apache.spark.ml.image.SamplePathFilter$
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition1D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NotMatchedActionContext
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.UpdateDelegationTokens
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.HistoryServerDiskManager.Lease
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.RemoteBlockDownloadFileManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LocalDateConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyContext
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.Deprecated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ResetConfigurationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IdentifierSeqContext
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel.BucketedRandomProjectionLSHModelWriter.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveRelations
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveCoalesceHints
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.ProgressReporter.ExecutionStats
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding.Encoder
[WARN] Unable to detect inner functions for class:org.apache.spark.graphx.PartitionStrategy.EdgePartition2D
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.OldData
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TinyIntLiteralContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TrimContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StructConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.$SortState
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.BasicNullableTypeConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ColumnarBatch.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator2$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ClassificationModel.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetBinaryDictionaryAwareDecimalConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.FixedPoint
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$14
[WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadChannel
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.FMClassificationModel.FMClassificationModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexAndValue
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateManager
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RegularQuerySpecificationContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.BooleanConverter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink
[WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.CountMinSketch.Version
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$ColumnMetrics
[WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManager.TempFileBasedBlockStoreUpdater
[WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.dynalloc.ExecutorMonitor.ExecutorIdCollector
[WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.BaseErrorHandler
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.1
Error instrumenting class:org.apache.spark.ml.tuning.TrainValidationSplitModel$TrainValidationSplitModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator1$3
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Postfix
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LDAModel.$$typecreator1$2
Error instrumenting class:org.apache.spark.sql.catalyst.util.CompressionCodecs$
[WARN] Unable to detect inner functions for class:org.apache.spark.executor.Executor.TaskRunner
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.RLEIntIterator
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableHeaderContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
[WARN] Unable to detect inner functions for class:org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillExecutor
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowSpecContext
[WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelWriter
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.attribute.AttributeType.Numeric$1$
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TransformContext
[WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.DoublePrefixComparator
Error instrumenting class:org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader
[WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LoadDataContext
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.WriteStyle.QuotedStyle
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Auto
[WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0.$typecreator1$4
[WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeNNZ
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DateType.$$typecreator1$1
[WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WhenClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredWorker [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator6$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.Heartbeat [WARN] Unable to detect inner functions for class:org.apache.spark.executor.CoarseGrainedExecutorBackend.Arguments [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.SparkSubmitCommandBuilder.$OptionParser [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ManageResourceContext [WARN] Unable to detect inner functions for class:org.apache.spark.ExecutorAllocationManager.ExecutorAllocationManagerSource [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.Metadata$ [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RealIdentContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticOperatorContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AssignmentContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultiInsertQueryBodyContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.crypto.TransportCipher.DecryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBlock [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslAttribute Error instrumenting class:org.apache.spark.sql.execution.streaming.FileSystemBasedCheckpointFileManager [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.SimpleDownloadFile.$SimpleDownloadWritableChannel [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.SeenFilesMap [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatSerdeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugExec.$SetAccumulator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinCriteriaContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ValueExpressionDefaultContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator22$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.MultilayerPerceptronClassifierWrapper.MultilayerPerceptronClassifierWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropViewContext Error instrumenting class:org.apache.spark.mllib.tree.model.TreeEnsembleModel$SaveLoadV1_0$Metadata [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StrictNonReservedContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator15$3 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.UnregisterApplication [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Link [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.DecimalConverter [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDBTypeInfo.1 [WARN] Unable to detect inner functions for class:org.apache.spark.SparkConf.AlternateConfig Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSourceLog [WARN] Unable to detect inner functions for class:org.apache.spark.util.random.StratifiedSamplingUtils.RandomDataGenerator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.LongType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data Error instrumenting class:org.apache.spark.WritableFactory$ [WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$StdChannelListener [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.AutoBatchedPickler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.param.shared.SharedParamsCodeGen.ParamDesc Error instrumenting class:org.apache.spark.ui.ServerInfo$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MultipartIdentifierListContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.IsotonicRegressionWrapper.IsotonicRegressionWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ElementwiseProduct.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RegexTokenizer.$$typecreator2$2 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.InMemoryTableScanExec.ExtractableLiteral [WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.NonRddStorageInfo [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.execution.RowToColumnConverter.ShortConverter Error instrumenting class:org.apache.spark.sql.execution.streaming.FileStreamSink$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMean [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator9$1 [WARN] Unable to detect inner functions for class:org.apache.spark.util.JsonProtocol.JOB_RESULT_FORMATTED_CLASS_NAMES Error instrumenting class:org.apache.spark.ui.WebUI [WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.KVStoreScalaSerializer [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimaryExpressionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator19$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IdentityConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.StringUtils.PlanStringConcat [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateManagerImplV1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ArithmeticUnaryContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckSuccess Error instrumenting class:org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog [WARN] Unable to detect inner functions for class:org.apache.spark.ml.evaluation.SquaredEuclideanSilhouette.ClusterStats [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.BooleanConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorIndexer.CategoryStats [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveSessionCatalog.SessionCatalogAndNamespace [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalSorter.SpilledFile [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.ValuesReaderIntIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator24$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.PCAModel.PCAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByBucketContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.DoubleAsIfIntegral [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SearchedCaseContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetConfigurationContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.RevokedLeadership [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RFormulaModel.RFormulaModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ColPositionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats [WARN] Unable to detect inner functions for class:org.apache.spark.graphx.lib.SVDPlusPlus.Conf [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator13$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterColumnActionContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.RetryingBlockFetcher.$RetryingBlockFetchListener [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleByRowsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.CatalystDataUpdater [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NonReservedContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueRowConverter [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.ElectedLeader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ExistsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.NamespaceChange.RemoveProperty [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.StringUtils.StringConcat [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlock [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator20$3 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTRegressorWrapper.GBTRegressorWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Interaction.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterTableAlterColumnContext Error instrumenting 
class:org.apache.spark.sql.execution.datasources.v2.csv.CSVScan Error instrumenting class:org.apache.spark.executor.ExecutorSource Error instrumenting class:org.apache.spark.sql.execution.datasources.FileFormatWriter$ [WARN] Unable to detect inner functions for class:org.apache.spark.TestUtils.JavaSourceFromString [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator7$2 Error instrumenting class:org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$ [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NullLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TruncateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Sum [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.adaptive.OptimizeLocalShuffleReader.BroadcastJoinWithShuffleLeft [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StarContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowPartitionsContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$18 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueRowConverterFormatV1 [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.ReaderIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.AddColumn [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestKillDriver [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SparkSession.Builder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.EvaluatePython.RowPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.TimSort.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveTables Error instrumenting class:org.apache.spark.input.StreamRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.UpdateTableContext [WARN] Unable to detect inner 
functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator10$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeExternalRowSorter.RowComparator [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.functions.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.PassThrough.Decoder [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GBTRegressionModel.$$typecreator2$1 Error instrumenting class:org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.BasicOperators [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.DistributedLDAModel.DistributedLDAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.PartitionTypeHelper [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsRequest [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.InternalLinearRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$12 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.DataSource.SourceInfo [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.AbstractLauncher.ArgumentValidator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DoubleType.DoubleIsConflicted [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.SerializerBuildHelper.MapElementInformation Error instrumenting class:org.apache.spark.ml.source.image.ImageFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SimpleCaseContext Error instrumenting class:org.apache.spark.sql.catalyst.expressions.codegen.Block$InlineHelper$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowFrame [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NestedConstantListContext Error instrumenting class:org.apache.spark.api.python.JavaToWritableConverter Error instrumenting class:org.apache.spark.sql.execution.datasources.PartitionDirectory$ [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Family [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryOrganizationContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.classification.FMClassificationModel.FMClassificationModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkDirCleanup [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator10$3 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayesModel.NaiveBayesModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.IntervalUnitContext Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.TextBasedFileScan [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.$SpillableIterator [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.ExternalIterator.$StreamBuffer [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator13$1 [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.WriterThread [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.MaxAbsScalerModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$2 [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.RpcEndpointVerifier.CheckExistence [WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.RebaseDateTime.JsonRebaseRecord [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.MultilayerPerceptronClassificationModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator12$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.ProbabilisticClassificationModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$7 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NNLSSolver [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeans.ClusterSummary [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.DecisionTreeRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.network.util.NettyUtils.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableLikeContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.python.MLSerDe.DenseVectorPickler [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InMemoryIterator [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.SerDeUtil.ByteArrayConstructor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinSide [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.columnar.compression.DictionaryEncoding.Encoder [WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.Instrumentation.loggerTags [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.TableDesc [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator2$3 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.dstream.FileInputDStream.FileInputDStreamCheckpointData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SkewSpecContext [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator21$2 Error instrumenting class:org.apache.spark.mllib.clustering.GaussianMixtureModel$SaveLoadV1_0$Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproxCountDistinctForIntervals.LongArrayInternalRow [WARN] Unable to detect inner functions for class:org.apache.spark.network.client.TransportClient.$3 [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.ErrorHandlingInputStream Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$StoreFile [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator5$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.Projection [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator2$5 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.RemoveAllHints [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.PythonMLLibAPI.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherBackend.BackendConnection [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetIntDictionaryAwareDecimalConverter Error instrumenting class:org.apache.spark.mllib.clustering.DistributedLDAModel$SaveLoadV1_0$EdgeData [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpan.Prefix [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.BooleanType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.HintStatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LinearSVCWrapper.LinearSVCWrapperWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.JdbcRDD.ConnectionFactory [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.OrderedIdentifierListContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.CatalogImpl.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveInsertInto [WARN] Unable to detect inner functions for class:org.apache.spark.executor.ExecutorMetricsPoller.TCMP Error instrumenting class:org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$6 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.UnsafeKVExternalSorter.KVComparator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.Block.BlockHelper [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAssembler.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateValueWatermarkPredicate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveWindowOrder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.InstantConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveSessionCatalog.TempViewOrV1Table [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelReader.$$typecreator4$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.TreeEnsembleModel.SaveLoadV1_0.$typecreator1$1 Error instrumenting class:org.apache.spark.sql.execution.datasources.json.TextInputJsonDataSource$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.WeightedLeastSquares.Solver [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.KolmogorovSmirnovTest.KolmogorovSmirnovTestResult [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MaxAbsScalerModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TablePropertyValueContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.QueryPlanningTracker.RuleSummary Error instrumenting class:org.apache.spark.ui.ProxyRedirectHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NamedExpressionSeqContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionIdentifierContext [WARN] Unable to detect inner functions for class:org.apache.spark.network.util.LevelDBProvider.1 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AddTableColumnsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DmlStatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleDataTypeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.joins.BuildLeft [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Wildcard [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTablePropertiesContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan$ [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.TableIdentifierHelper [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionBase.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DeleteFromTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FailureFetchResult [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV2_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.WindowFrameCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.ByteConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAliases [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.FloatHasher [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.SignedPrefixComparatorNullsLast Error instrumenting class:org.apache.spark.sql.execution.datasources.InMemoryFileIndex$ [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.ListObjectOutput [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherServer.$ServerConnection [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatContext [WARN] Unable to detect 
inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsAndStatus [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Node [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.DslString Error instrumenting class:org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore Error instrumenting class:org.apache.spark.api.python.TestOutputValueConverter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.RadixSortSupport [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.MapConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.PartitioningUtils.PartitionValues [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator4$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeRelationContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ExtractGenerator.AliasedGenerator$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.PipelineModel.PipelineModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin.AttrWithCast [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.AFTSurvivalRegressionWrapper.AFTSurvivalRegressionWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.SparkStrategies.PythonEvals [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.v2.DataSourceV2Implicits.OptionsHelper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementDefaultContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator3$6 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateNamespaceContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveGenerate [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0.$typecreator1$6 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReconnectWorker [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.ExternalAppendOnlyUnsafeRowArrayIterator [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.BlockLocationsAndStatus [WARN] Unable to detect 
inner functions for class:org.apache.spark.ml.regression.InternalLinearRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QualifiedNameContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.dsl.ExpressionConversions.StringToAttributeConversionHelper [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.PullOutNondeterministic [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.DriverStateChanged [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.JoinRelationContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetFilters.ParquetSchemaType [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FirstContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InsertIntoTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator22$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableLocationContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.FlatMapGroupsWithStateExecHelper.StateData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.LDAWrapper.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator18$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.LogicalBinaryContext Error instrumenting class:org.apache.spark.SparkEnv$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.ByteAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.rules.RuleExecutor.Batch [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetStorageStatus [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.StateStoreOps [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.QuantileSummaries.Stats [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorSizeHint.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.EdgeData$ [WARN] Unable to detect inner functions for 
class:org.apache.spark.mllib.api.python.SerDe.DenseMatrixPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SubscriptContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.api.r.SQLUtils.RegexContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.MasterChangeAcknowledged [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.PythonWorkerFactory.MonitorThread [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveTableValuedFunctions.ArgumentList [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV2_0.$typecreator1$4 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$13 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.kafka010.KafkaDataConsumer.CachedKafkaDataConsumer [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FunctionTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.QueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator15$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.IsotonicRegressionModel.SaveLoadV1_0.$typecreator1$1 Error instrumenting class:org.apache.spark.deploy.history.EventLogFileWriter$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.LSHModel.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.DiskBlockObjectWriter.ManualCloseOutputStream Error instrumenting class:org.apache.spark.streaming.StreamingContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.FeatureHasher.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeWeightSum [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterWorkerResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.DecimalAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0.Data$ Error instrumenting class:org.apache.spark.sql.catalyst.catalog.InMemoryCatalog$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter.Data Error instrumenting class:org.apache.spark.streaming.api.java.JavaStreamingContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinConditionSplitPredicates [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalBlockStoreClient.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.DecimalConverter [WARN] 
Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator20$1 [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.DecryptionHandler [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.InMemoryStore.InstanceList [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$8 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.First [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.BasicNullableTypeConverter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.RandomForestRegressionModel.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.functions.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Metric [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.OutputSpec [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.SerializationDebugger.NullOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.KillDriverResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.PythonForeachWriter.UnsafeRowBuffer [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.MutableStateArrays [WARN] Unable to detect inner functions for class:org.apache.spark.rdd.PipedRDD.NotEqualsFileNameFilter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromStatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableNameContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowViewsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.DumpByteCode [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTablePartitionsContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Variance [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.WindowInPandasExec.BoundedWindow [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.SparkSubmitUtils.MavenCoordinate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateTempViewUsingContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery [WARN] Unable to detect inner functions for class:org.apache.spark.rpc.netty.NettyRpcEnv.FileDownloadCallback [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.CatalystTypeConverters.StringConverter Error instrumenting class:org.apache.spark.sql.execution.datasources.csv.CSVFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.EpochMarkerGenerator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicates [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.AFTSurvivalRegressionModel.AFTSurvivalRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.KVTypeInfo.$FieldAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Power [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CommentNamespaceContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.ValueAndMatchPair [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection.Schema [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerDriverStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.KolmogorovSmirnovTest.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.GBTClassificationModel.GBTClassificationModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.VectorAttributeRewriter.VectorAttributeRewriterWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileStreamSourceCleaner [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.DummySerializerInstance.$1 Error instrumenting class:org.apache.spark.input.ConfigurableCombineFileRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NotMatchedClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.ShuffleBlockFetcherIterator.FetchRequest [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPTree.Summary Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.orc.OrcScan [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveOutputRelation [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.CreateFileFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinExec.OneSideHashJoiner.$AddingProcessedRowToStateCompletionIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.python.WindowInPandasExec.UnboundedWindow [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.scheduler.JobScheduler.JobHandler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.PathInstruction.Named [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$4 [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.regression.RandomForestRegressionModel.RandomForestRegressionModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.HandleNullInputsForUDF [WARN] Unable to detect inner functions for class:org.apache.spark.network.protocol.Message.Type [WARN] Unable to detect inner functions for class:org.apache.spark.graphx.impl.VertexPartition.VertexPartitionOpsConstructor [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.OneForOneBlockFetcher.$DownloadCallback [WARN] Unable to detect inner functions for class:org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Normalizer.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.util.kvstore.LevelDB.TypeAliases [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.DecisionTreeRegressionModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.Empty2Null Error instrumenting class:org.apache.spark.sql.catalyst.expressions.codegen.Block$BlockHelper$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMetric Error instrumenting class:org.apache.spark.sql.execution.streaming.SinkFileStatus$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetArrayConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropNamespaceContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.FMRegressionModel.FMRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinHashLSHModel.MinHashLSHModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.HashingTF.HashingTFReader Error instrumenting class:org.apache.spark.sql.execution.datasources.v2.json.JsonWriteBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ChiSqSelectorModel.ChiSqSelectorModelWriter.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.StackCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.UpdateColumnComment [WARN] Unable to detect inner functions for class:org.apache.spark.api.python.BasePythonRunner.MonitorThread [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.GlobalAggregates Error instrumenting class:org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$ [WARN] Unable to detect inner functions for class:org.apache.spark.util.SizeEstimator.SearchState [WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.input [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.StreamingSymmetricHashJoinHelper.JoinStateWatermarkPredicate [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.classification.LinearSVCModel.LinearSVCReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RobustScalerModel.$$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.InternalLinearRegressionModelWriter.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcDeserializer.RowUpdater Error instrumenting class:org.apache.spark.status.api.v1.ApiRootResource$ [WARN] Unable to detect inner functions for class:org.apache.spark.InternalAccumulator.shuffleRead [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.FileFormatWriter.Empty2Null [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowFrameContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator1$4 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorUpdated [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter Error instrumenting class:org.apache.spark.mllib.clustering.LocalLDAModel$SaveLoadV1_0$Data [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.RandomForestRegressorWrapper.RandomForestRegressorWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault1Context [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator14$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ErrorCapturingIdentifierExtraContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.aggregate.HashMapGenerator.Buffer [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.TimestampType.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchedExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.LogisticRegressionModel.LogisticRegressionModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator9$2 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerDriverStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyAndNumValues [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.StructConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAggregateFunctions [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.InlineTableDefault2Context [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.r.DecisionTreeClassifierWrapper.DecisionTreeClassifierWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.StructNullableTypeConverter [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ReregisterWithMaster [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.TimestampAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.util.logging.DriverLogger.DfsAsyncWriter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.SortComparator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.Rating [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.receiver.ReceiverSupervisor.ReceiverState [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.$$typecreator5$2 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RenameTablePartitionContext [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.CryptoParams [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.Stop [WARN] Unable to detect inner functions for class:org.apache.spark.util.sketch.BloomFilter.Version [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.DataTypeJsonUtils.DataTypeJsonSerializer [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator24$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.V1Table.IdentifierHelper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.NumberContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedRleValuesReader.MODE [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.KeyWrapper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.LongConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AlterViewQueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator2$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis [WARN] Unable to detect inner functions for class:org.apache.spark.sql.Encoders.$typecreator1$10 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFuncNameContext Error instrumenting class:org.apache.spark.SparkContext$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.KolmogorovSmirnovTest.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GaussianMixtureWrapper.GaussianMixtureWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.NormalEquation [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.StreamFallbackChannelWrapper Error instrumenting 
class:org.apache.spark.sql.execution.datasources.CodecStreams$ [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLContext.implicits [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator1$7 [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.OpenHashMapBasedStateMap.StateInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.NonSessionCatalogAndIdentifier [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleStatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BooleanLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SelectClauseContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.RebaseDateTime.RebaseInfo [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.UncompressedInBlockBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.ArraySortLike.NullOrder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.FloatType.FloatAsIfIntegral Error instrumenting class:org.apache.spark.sql.execution.command.PathFilterIgnoreNonData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.RemovedConfig [WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.Trigger [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0.Data [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisteredApplication [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.BlacklistTracker.ExecutorFailureList.TaskId [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRest.OneVsRestWriter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile.PercentileDigestSerializer [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.ErrorHandlingOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.NaiveBayes.$$typecreator3$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RefreshResourceContext [WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.HashMapGrowthStrategy.Doubling [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.FromStmtContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.impl.RandomForest.NodeIndexInfo [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.OneHotEncoderModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SingleFunctionIdentifierContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0.$typecreator1$5 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileStreamSourceCleaner [WARN] Unable to detect inner functions for class:org.apache.spark.status.KVUtils.MetadataMismatchException [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.LookupCatalog.CatalogAndNamespace [WARN] Unable to detect inner functions for class:org.apache.spark.unsafe.map.BytesToBytesMap.$MapIterator [WARN] Unable to detect inner functions for class:org.apache.spark.ml.optim.QuasiNewtonSolver.NormalEquationCostFun [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.Mean [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTablesContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Binarizer.$$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.CountVectorizerModel.CountVectorizerModelWriter.Data Error instrumenting class:org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.$$typecreator1$15 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BeginRecovery [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ApplicationRemoved [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.StateStoreHandler [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.DecisionTreeClassificationModel.DecisionTreeClassificationModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.WorkerSchedulerStateResponse [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.OpenHashSet.LongHasher [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.MapKeyDedupPolicy [WARN] Unable to detect inner functions for class:org.apache.spark.serializer.KryoSerializer.PoolWrapper [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RequestMasterState [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.IsotonicRegressionModel.IsotonicRegressionModelWriter.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.MatchedClauseContext Error instrumenting class:org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport Error instrumenting class:org.apache.spark.ui.SparkUI [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.ResolveHints.ResolveJoinStrategyHints [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveWorker [WARN] Unable to detect inner functions for 
class:org.apache.spark.sql.catalyst.expressions.aggregate.DeclarativeAggregate.RichAttribute Error instrumenting class:org.apache.spark.sql.catalyst.catalog.ExternalCatalogUtils$ [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.SizeTracker.Sample [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ConstantListContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.KMeansModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.orc.OrcShimUtils.VectorizedRowBatchWrap [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV3_0 [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetAppId [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ToBlockManagerMaster [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.OneVsRestModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeL1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.ConcatCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.ml.util.DatasetUtils.$typecreator4$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.ColumnPruner.ColumnPrunerReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveUpCast [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolvePivot [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.InMemoryBufferIterator [WARN] Unable to detect inner functions for class:org.apache.spark.executor.CoarseGrainedExecutorBackend.RegisteredExecutor [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.stat.FrequentItems.FreqItemCounter [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.unsafe.sort.PrefixComparators.StringPrefixComparator [WARN] Unable to detect inner functions for class:org.apache.spark.streaming.util.BatchedWriteAheadLog.Record [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.InternalKMeansModelWriter.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.feature.Word2VecModel.SaveLoadV1_0.$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.TableChange.ColumnPosition [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.GenericFileFormatContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SetTableSerDeContext [WARN] Unable to detect inner functions for class:org.apache.spark.shuffle.sort.UnsafeShuffleWriter.MyByteArrayOutputStream [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.WindowRefContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowTblPropertiesContext [WARN] Unable to detect inner functions for 
class:org.apache.spark.ml.clustering.LocalLDAModel.LocalLDAModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.TableContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DropTableColumnsContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.UDTConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.ShortConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.CatalogV2Implicits.NamespaceHelper [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IfCoercion [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter.ParquetStringConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.streaming.StreamingQueryListener.Event [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.FPGrowth.FreqItemset [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator17$3 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DmlStatementNoWithContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.RandomForestClassificationModel.RandomForestClassificationModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.RegisterApplication [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.NormL2 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.stat.SummaryBuilderImpl.ComputeMin [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.history.AppListingListener.MutableApplicationInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator7$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.StatementContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeNamespaceContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.ContinuousRecord [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.IntConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.AliasedQueryContext [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.LaunchDriver [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.Decimal.DecimalIsConflicted [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.IDFModel.IDFModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegressionModel.$$typecreator2$1 [WARN] Unable to 
detect inner functions for class:org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.$$typecreator5$5 [WARN] Unable to detect inner functions for class:org.apache.spark.network.sasl.SaslEncryption.EncryptedMessage [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.OutputCommitCoordinator.StageState [WARN] Unable to detect inner functions for class:org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.AppExecId [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetBlockStatus [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.SampleContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PositionContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.Analyzer.ResolveAlterTableChanges [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.CatalystTypeConverters.IntConverter [WARN] Unable to detect inner functions for class:org.apache.spark.sql.connector.catalog.NamespaceChange.SetProperty [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DecimalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator16$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.api.python.SerDe.SparseVectorPickler [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator8$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.GaussianMixtureModel.SaveLoadV1_0.Data$ [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.StringIndexerModel.StringIndexModelWriter.$$typecreator1$3 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.DistributedLDAModel.SaveLoadV1_0.$typecreator3$2 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.StandaloneResourceUtils.MutableResourceInfo [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.FileStreamSource.FileEntry [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.BigDecimalLiteralContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALSModel.$$typecreator5$1 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.RegexTokenizer.$$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.GetPeers [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.analysis.TypeCoercion.IntegralDivision [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.PMMLLinearRegressionModelWriter.Data Error instrumenting class:org.apache.spark.input.WholeTextFileRecordReader [WARN] Unable to detect inner functions for class:org.apache.spark.ml.recommendation.ALS.RatingBlockBuilder [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.util.StringUtils.StringConcat [WARN] Unable to detect inner functions for class:org.apache.spark.ml.classification.OneVsRestModel.$$typecreator3$1 Error instrumenting class:org.apache.spark.internal.io.HadoopMapReduceCommitProtocol 
[WARN] Unable to detect inner functions for class:org.apache.spark.sql.SQLImplicits.StringToColumn [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.PredictData [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.MinMaxScalerModel.MinMaxScalerModelWriter [WARN] Unable to detect inner functions for class:org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.RowFormatDelimitedContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.ValueAndMatchPair [WARN] Unable to detect inner functions for class:org.apache.spark.launcher.LauncherProtocol.SetState [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.LocalPrefixSpan.ReversedPrefix [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.OneHotEncoderModel.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.master.MasterMessages.BoundPortsResponse [WARN] Unable to detect inner functions for class:org.apache.spark.security.CryptoStreamUtils.ErrorHandlingReadableChannel [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.debug.DebugStreamQuery [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.DataReaderThread [WARN] Unable to detect inner functions for class:org.apache.spark.status.ElementTrackingStore.LatchedTriggers [WARN] Unable to detect inner functions for class:org.apache.spark.ml.clustering.GaussianMixtureModel.GaussianMixtureModelWriter.Data [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.tree.model.RandomForestModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.EnsembleModelReadWrite.$typecreator2$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.vectorized.ArrowColumnVector.FloatAccessor [WARN] Unable to detect inner functions for class:org.apache.spark.deploy.DeployMessages.ExecutorStateChanged [WARN] Unable to detect inner functions for class:org.apache.spark.ml.regression.LinearRegressionModel.LinearRegressionModelReader [WARN] Unable to detect inner functions for class:org.apache.spark.util.collection.ExternalAppendOnlyMap.SpillableIterator [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyToValuePair [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.streaming.state.SymmetricHashJoinStateManager.KeyWithIndexToValueType [WARN] Unable to detect inner functions for class:org.apache.spark.storage.BlockManagerMessages.ReplicateBlock [WARN] Unable to detect inner functions for class:org.apache.spark.network.server.TransportRequestHandler.$1 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.util.MLUtils.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0.$typecreator1$2 [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0 [WARN] Unable to detect inner functions for 
class:org.apache.spark.deploy.DeployMessages.ApplicationFinished [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.ScalaReflection.$typecreator13$3 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.r.GBTClassifierWrapper.GBTClassifierWrapperReader [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.LegacyBehaviorPolicy [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.PrimitiveDataTypeContext [WARN] Unable to detect inner functions for class:org.apache.spark.sql.types.DecimalType.Fixed [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.DescribeFunctionContext [WARN] Unable to detect inner functions for class:org.apache.spark.storage.StorageStatus.RddStorageInfo [WARN] Unable to detect inner functions for class:org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS.$$typecreator1$1 [WARN] Unable to detect inner functions for class:org.apache.spark.sql.execution.RowToColumnConverter.LongConverter Error instrumenting class:org.apache.spark.sql.execution.datasources.text.TextFileFormat [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.$$typecreator1$4 [WARN] Unable to detect inner functions for class:org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData [WARN] Unable to detect inner functions for class:org.apache.spark.sql.internal.SQLConf.StoreAssignmentPolicy [WARN] Unable to detect inner functions for class:org.apache.spark.sql.catalyst.parser.SqlBaseParser.ShowCreateTableContext [WARN] Unable to detect inner functions for class:org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter.$$typecreator5$1
Created : .generated-mima-class-excludes in current directory.
Created : .generated-mima-member-excludes in current directory.
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME. Note, this will be overridden by -java-home if it is set.
[info] welcome to sbt 1.4.6 (Private Build Java 1.8.0_222)
[info] loading settings for project sparkpullrequestbuilder-build from plugins.sbt ...
[info] loading project definition from /home/jenkins/workspace/SparkPullRequestBuilder/project
[info] resolving key references (36218 settings) ...
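The long run of "[WARN] Unable to detect inner functions for class:..." and "Error instrumenting class:..." entries above is emitted by the MiMa-exclude generation step (Spark's GenerateMIMAIgnore tool), which produced the two "Created : .generated-mima-*-excludes" files just listed: classes whose inner members cannot be loaded reflectively are not failed on, but written to the exclude files so the later binary-compatibility check skips them. A minimal sketch of that kind of reflective probe (this is illustrative, not Spark's actual GenerateMIMAIgnore code; the class names in main are examples only):

    object InnerClassProbe {
      // Load a class by name without initializing it and list its declared
      // inner classes. Synthetic Scala classes, e.g. the $typecreator
      // anonymous classes flagged above, can fail to resolve here; a tool
      // like the one above downgrades that to a [WARN] plus an exclude entry.
      def innerMembers(name: String): Either[Throwable, Seq[String]] =
        try {
          val cls = Class.forName(name, false, getClass.getClassLoader)
          Right(cls.getDeclaredClasses.map(_.getName).toSeq)
        } catch { case t: Throwable => Left(t) }

      def main(args: Array[String]): Unit = {
        println(innerMembers("java.util.AbstractMap"))           // resolves; lists inner classes
        println(innerMembers("com.example.Foo$$typecreator1$1")) // hypothetical name; yields Left(...)
      }
    }

Excluding rather than failing keeps the build green when reflection cannot see compiler-generated inner classes, at the cost of those classes never being compatibility-checked.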
[info] set current project to spark-parent (in build file:/home/jenkins/workspace/SparkPullRequestBuilder/) [warn] there are 204 keys that are not used by any other settings/tasks: [warn] [warn] * assembly / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * assembly / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * assembly / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * assembly / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * assembly / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * assembly / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * avro / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * avro / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * avro / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * avro / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * avro / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * avro / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * catalyst / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * catalyst / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * catalyst / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * catalyst / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * catalyst / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * catalyst / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * core / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * core / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * core / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * core / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * core / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * core / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * examples / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * 
examples / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * examples / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * examples / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * examples / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * examples / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * ganglia-lgpl / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * ganglia-lgpl / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * ganglia-lgpl / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * ganglia-lgpl / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * ganglia-lgpl / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * ganglia-lgpl / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * graphx / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * graphx / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * graphx / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * graphx / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * graphx / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * graphx / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * hadoop-cloud / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * hadoop-cloud / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * hadoop-cloud / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * hadoop-cloud / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * hadoop-cloud / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * hadoop-cloud / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * hive / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * hive / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * hive / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * hive 
/ Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * hive / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * hive / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * hive-thriftserver / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * hive-thriftserver / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * hive-thriftserver / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * hive-thriftserver / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * hive-thriftserver / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * hive-thriftserver / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * kubernetes / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * kubernetes / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * kubernetes / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * kubernetes / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * kubernetes / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * kubernetes / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * kvstore / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * kvstore / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * kvstore / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * kvstore / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * kvstore / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * kvstore / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * launcher / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * launcher / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * launcher / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * launcher / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * launcher / scalaStyleOnCompile / logLevel [warn] +- 
/home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * launcher / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * mesos / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * mesos / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * mesos / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * mesos / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * mesos / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * mesos / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * mllib / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * mllib / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * mllib / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * mllib / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * mllib / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * mllib / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * mllib-local / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * mllib-local / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * mllib-local / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * mllib-local / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * mllib-local / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * mllib-local / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * network-common / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * network-common / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * network-common / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * network-common / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * network-common / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * network-common / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * network-shuffle / Compile / checkstyle / javaSource 
[warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * network-shuffle / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * network-shuffle / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * network-shuffle / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * network-shuffle / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * network-shuffle / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * network-yarn / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * network-yarn / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * network-yarn / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * network-yarn / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * network-yarn / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * network-yarn / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * repl / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * repl / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * repl / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * repl / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * repl / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * repl / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * sketch / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * sketch / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * sketch / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * sketch / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * sketch / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * sketch / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * spark / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * spark / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * spark / Sbt / publishMavenStyle [warn] +- 
/home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] [... the list continues with the same six keys for each of the modules spark, sql, sql-kafka-0-10, streaming, streaming-kafka-0-10, streaming-kafka-0-10-assembly, streaming-kinesis-asl, streaming-kinesis-asl-assembly, tags, token-provider-kafka-0-10, tools, unsafe, and yarn ...]
[warn] * <module> / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * <module> / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * <module> / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * <module> / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * <module> / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * <module> / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn]
[warn] note: a setting might still be used by a command; to exclude a key from this `lintUnused` check
[warn] either append it to `Global / excludeLintKeys` or call .withRank(KeyRanks.Invisible) on the key
[info] spark-parent: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-tags: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-kvstore: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-unsafe: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-network-common: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-network-shuffle: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-network-yarn: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-tools: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-ganglia-lgpl: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-yarn: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-kubernetes: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-mesos: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-token-provider-kafka-0-10: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-streaming-kinesis-asl: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-catalyst: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-streaming-kinesis-asl-assembly: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-hadoop-cloud: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-sql-kafka-0-10: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-streaming-kafka-0-10-assembly: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
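The "mimaPreviousArtifacts not set, not analyzing binary compatibility" lines here come from the sbt MiMa plugin: a module is only checked when that key points at previously released artifacts. As a rough sketch, assuming a plain sbt-mima-plugin setup rather than Spark's actual SparkBuild.scala wiring, a module would opt in like this (the module name and version below are placeholders):

    // Hypothetical build.sbt fragment (sketch only): opt one module into MiMa checking.
    // While mimaPreviousArtifacts is left empty, the plugin prints the
    // "not analyzing binary compatibility" lines seen throughout this log.
    mimaPreviousArtifacts := Set(
      "org.apache.spark" %% "spark-core" % "3.0.0" // placeholder previous release
    )

Running the plugin's mimaReportBinaryIssues task would then diff the current classes against that released artifact.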
[info] spark-hive: mimaPreviousArtifacts not set, not analyzing binary compatibility
[info] spark-avro: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-repl: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-hive-thriftserver: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[info] spark-examples: mimaPreviousArtifacts not set, not analyzing binary compatibility
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] spark-assembly: mimaPreviousArtifacts not set, not analyzing binary compatibility
[success] Total time: 50 s, completed Jan 17, 2021 9:11:58 AM
[info] Building Spark assembly using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Phive-thriftserver -Pkubernetes -Pmesos -Pspark-ganglia-lgpl -Pkinesis-asl -Phive -Pyarn -Phadoop-cloud assembly/package
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME. Note, this will be overridden by -java-home if it is set.
[info] welcome to sbt 1.4.6 (Private Build Java 1.8.0_222)
[info] loading settings for project sparkpullrequestbuilder-build from plugins.sbt ...
[info] loading project definition from /home/jenkins/workspace/SparkPullRequestBuilder/project
[info] resolving key references (36224 settings) ...
[info] set current project to spark-parent (in build file:/home/jenkins/workspace/SparkPullRequestBuilder/)
[warn] there are 204 keys that are not used by any other settings/tasks:
[warn]
[warn] [... the same six keys, shown once below, are flagged for each of the 34 modules: assembly, avro, catalyst, core, examples, ganglia-lgpl, graphx, hadoop-cloud, hive, hive-thriftserver, kubernetes, kvstore, launcher, mesos, mllib, mllib-local, network-common, network-shuffle, network-yarn, repl, sketch, spark, sql, sql-kafka-0-10, streaming, streaming-kafka-0-10, streaming-kafka-0-10-assembly, streaming-kinesis-asl, streaming-kinesis-asl-assembly, tags, token-provider-kafka-0-10, tools, unsafe, and yarn ...]
[warn] * <module> / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * <module> / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * <module> / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * <module> / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * <module> / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * <module> / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn]
[warn] note: a setting might still be used by a command; to exclude a key from this `lintUnused` check
[warn] either append it to `Global / excludeLintKeys` or call .withRank(KeyRanks.Invisible) on the key
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
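The lintUnused note above names two remedies. A minimal sketch of both, assuming stock sbt 1.4.x syntax; the key used here (scalaStyleOnCompile / logLevel, one of the flagged keys) is only an example, and the exact scoping in SparkBuild.scala may differ:

    // Remedy 1 (sketch): keep the setting but exempt it from the lintUnused check.
    Global / excludeLintKeys += scalaStyleOnCompile / logLevel

    // Remedy 2 (sketch): declare the key as invisible at its definition site,
    // so every scoped use of it is ignored by the lint.
    val scalaStyleOnCompile = taskKey[Unit]("Run scalastyle when compiling")
      .withRank(KeyRanks.Invisible)

Either change would silence the 204-key report without altering what the build actually does.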
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[success] Total time: 33 s, completed Jan 17, 2021 9:12:42 AM
========================================================================
Running Spark unit tests
========================================================================
[info] Running Spark tests using SBT with these arguments: -Phadoop-3.2 -Phive-2.3 -Phive-thriftserver -Phive -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.ExtendedYarnTest mllib/test sql/test hive-thriftserver/test sql-kafka-0-10/test catalyst/test avro/test repl/test examples/test hive/test
Using /usr/lib/jvm/java-8-openjdk-amd64/ as default JAVA_HOME. Note, this will be overridden by -java-home if it is set.
[info] welcome to sbt 1.4.6 (Private Build Java 1.8.0_222)
[info] loading settings for project sparkpullrequestbuilder-build from plugins.sbt ...
[info] loading project definition from /home/jenkins/workspace/SparkPullRequestBuilder/project
[info] resolving key references (28195 settings) ...
[info] set current project to spark-parent (in build file:/home/jenkins/workspace/SparkPullRequestBuilder/)
[warn] there are 156 keys that are not used by any other settings/tasks:
[warn]
[warn] [... the same six keys, shown once below, are flagged for each of the 26 modules: assembly, avro, catalyst, core, examples, graphx, hive, hive-thriftserver, kvstore, launcher, mllib, mllib-local, network-common, network-shuffle, repl, sketch, spark, sql, sql-kafka-0-10, streaming, streaming-kafka-0-10, streaming-kafka-0-10-assembly, tags, token-provider-kafka-0-10, tools, and unsafe ...]
[warn] * <module> / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * <module> / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * <module> / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * <module> / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * <module> / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * <module> / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
/home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * catalyst / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * catalyst / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * catalyst / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * catalyst / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * core / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * core / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * core / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * core / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * core / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * core / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * examples / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * examples / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * examples / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * examples / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * examples / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * examples / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * graphx / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * graphx / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * graphx / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * graphx / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * graphx / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * graphx / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * hive / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * hive / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * hive / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * hive / Test / checkstyle / javaSource [warn] +- 
/home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * hive / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * hive / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * hive-thriftserver / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * hive-thriftserver / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * hive-thriftserver / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * hive-thriftserver / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * hive-thriftserver / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * hive-thriftserver / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * kvstore / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * kvstore / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * kvstore / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * kvstore / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * kvstore / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * kvstore / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * launcher / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * launcher / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * launcher / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * launcher / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * launcher / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * launcher / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * mllib / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * mllib / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * mllib / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * mllib / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * mllib / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * mllib / scalaStyleOnTest / logLevel 
[warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * mllib-local / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * mllib-local / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * mllib-local / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * mllib-local / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * mllib-local / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * mllib-local / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * network-common / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * network-common / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * network-common / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * network-common / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * network-common / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * network-common / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * network-shuffle / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * network-shuffle / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * network-shuffle / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * network-shuffle / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * network-shuffle / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * network-shuffle / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * repl / Compile / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001 [warn] * repl / M2r / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285 [warn] * repl / Sbt / publishMavenStyle [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286 [warn] * repl / Test / checkstyle / javaSource [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002 [warn] * repl / scalaStyleOnCompile / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188 [warn] * repl / scalaStyleOnTest / logLevel [warn] +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189 [warn] * sketch / Compile / checkstyle / javaSource [warn] +- 
/home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * sketch / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * sketch / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * sketch / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * sketch / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * sketch / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * spark / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * spark / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * spark / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * spark / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * spark / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * spark / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * sql / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * sql / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * sql / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * sql / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * sql / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * sql / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * sql-kafka-0-10 / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * sql-kafka-0-10 / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * sql-kafka-0-10 / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * sql-kafka-0-10 / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * sql-kafka-0-10 / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * sql-kafka-0-10 / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * streaming / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * streaming / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * streaming / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * streaming / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * streaming / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * streaming / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * streaming-kafka-0-10 / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * streaming-kafka-0-10 / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * streaming-kafka-0-10 / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * streaming-kafka-0-10 / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * streaming-kafka-0-10 / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * streaming-kafka-0-10 / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * streaming-kafka-0-10-assembly / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * streaming-kafka-0-10-assembly / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * streaming-kafka-0-10-assembly / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * streaming-kafka-0-10-assembly / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * streaming-kafka-0-10-assembly / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * streaming-kafka-0-10-assembly / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * tags / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * tags / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * tags / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * tags / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * tags / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * tags / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * token-provider-kafka-0-10 / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * token-provider-kafka-0-10 / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * token-provider-kafka-0-10 / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * token-provider-kafka-0-10 / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * token-provider-kafka-0-10 / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * token-provider-kafka-0-10 / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * tools / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * tools / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * tools / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * tools / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * tools / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * tools / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn] * unsafe / Compile / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1001
[warn] * unsafe / M2r / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:285
[warn] * unsafe / Sbt / publishMavenStyle
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:286
[warn] * unsafe / Test / checkstyle / javaSource
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:1002
[warn] * unsafe / scalaStyleOnCompile / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:188
[warn] * unsafe / scalaStyleOnTest / logLevel
[warn]   +- /home/jenkins/workspace/SparkPullRequestBuilder/project/SparkBuild.scala:189
[warn]
[warn] note: a setting might still be used by a command; to exclude a key from this `lintUnused` check
[warn] either append it to `Global / excludeLintKeys` or call .withRank(KeyRanks.Invisible) on the key
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
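The sbt note above names two remedies for the `lintUnused` warnings. A minimal Scala sketch of both, assuming sbt 1.4+ and an ordinary build.sbt (where sbt._ and sbt.Keys._ are auto-imported); the key names echo the warnings above for illustration only and are not a proposed change to Spark's actual SparkBuild.scala:

  // Option 1: keep the settings but exclude their keys from the
  // `lintUnused` check that produced the [warn] block above.
  Global / excludeLintKeys ++= Set(publishMavenStyle, logLevel)

  // Option 2: define custom keys with an Invisible rank so the linter
  // (and `inspect`) skips them; the description string is illustrative.
  val scalaStyleOnCompile = taskKey[Unit]("run scalastyle on compile")
    .withRank(KeyRanks.Invisible)

Note that `Global / excludeLintKeys` only silences the lint; the settings themselves still apply wherever they are scoped.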
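The remaining warnings each name a follow-up sbt command. A sketch of running them from the sbt shell; the `core` project id is an assumption for illustration, not something shown in this log:

  > evicted
  > show core / Compile / discoveredMainClasses

`evicted` prints, per project, which conflicting dependency versions sbt's resolution evicted; `show ... discoveredMainClasses` lists the main classes behind the "multiple main classes" warning, which is commonly resolved by pinning `Compile / mainClass`.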
[info] - params (105 milliseconds) [info] - streaming transform (5 seconds, 635 milliseconds) [info] - read/write (771 milliseconds) [info] - logistic regression: default params (3 seconds, 243 milliseconds) [info] KMeansSuite: [info] - single cluster (2 seconds, 654 milliseconds) [info] - fewer distinct points than clusters (355 milliseconds) [info] - logistic regression: illegal params (3 seconds, 529 milliseconds) [info] - empty probabilityCol or predictionCol (2 seconds, 289 milliseconds) [info] - unique cluster centers (3 seconds, 398 milliseconds) [info] - Linear SVC binary classification (10 seconds, 798 milliseconds) [info] - deterministic initialization (1 second, 98 milliseconds) [info] - single cluster with big dataset (1 second, 180 milliseconds) [info] - check summary types for binary and multiclass (4 seconds, 572 milliseconds) [info] - setThreshold, getThreshold (9 milliseconds) [info] - single cluster with sparse data (1 second, 379 milliseconds) [info] - k-means|| initialization (370 milliseconds) [info] - two clusters (626 milliseconds) [info] - model save/load (3 seconds, 165 milliseconds) [info] - Initialize using given cluster centers (12 milliseconds) [info] - Kryo class register (20 milliseconds) [info] MatrixFactorizationModelSuite: [info] - constructor (258 milliseconds) [info] - save/load (1 second, 424 milliseconds) [info] - invalid user and product (267 milliseconds) [info] - batch predict API recommendProductsForUsers (145 milliseconds) [info] - batch predict API recommendUsersForProducts (86 milliseconds) [info] - thresholds prediction (6 seconds, 719 milliseconds) [info] DecisionTreeClassifierSuite: [info] - params (28 milliseconds) [info] - Binary classification stump with ordered categorical features (1 second, 334 milliseconds) [info] - logistic regression doesn't fit intercept when fitIntercept is off (1 second, 526 milliseconds) [info] - logistic regression with setters (2 seconds, 220 milliseconds) [info] - Binary classification stump with fixed labels 0,1 for Entropy,Gini (2 seconds, 716 milliseconds) [info] - Multiclass classification stump with 3-ary (unordered) categorical features (542 milliseconds) [info] - Binary classification stump with 1 continuous feature, to check off-by-1 error (458 milliseconds) [info] - Binary classification stump with 2 continuous features (423 milliseconds) [info] - Multiclass classification stump with unordered categorical features, with just enough bins (445 milliseconds) [info] - Multiclass classification stump with continuous features (783 milliseconds) [info] - Multiclass classification stump with continuous + unordered categorical features (915 milliseconds) [info] - Multiclass classification stump with 10-ary (ordered) categorical features (675 milliseconds) [info] - Multiclass classification tree with 10-ary (ordered) categorical features, with just enough bins (637 milliseconds) [info] - split must satisfy min instances per node requirements (519 milliseconds) [info] - do not choose split that does not satisfy min instance per node requirements (390 milliseconds) [info] - split must satisfy min info gain requirements (421 milliseconds) [info] - multinomial logistic regression: Predictor, Classifier methods (7 seconds, 935 milliseconds) [info] - binary logistic regression: Predictor, Classifier methods (4 seconds, 100 milliseconds) [info] - prediction on single instance (2 seconds, 474 milliseconds) [info] - Linear SVC binary classification with regularization (28 seconds, 844 milliseconds) [info] - params (16 
milliseconds) [info] - predictRaw and predictProbability (9 seconds, 36 milliseconds) [info] - linear svc: default params (877 milliseconds) [info] - prediction on single instance (825 milliseconds) [info] - training with 1-category categorical feature (231 milliseconds) [info] - Feature importance with toy data (231 milliseconds) [info] - model support predict leaf index (90 milliseconds) [info] - should support all NumericType labels and not support other types (1 second, 775 milliseconds) [info] - LinearSVC threshold acts on rawPrediction (3 seconds, 283 milliseconds) [info] - Fitting without numClasses in metadata (635 milliseconds) [info] - linear svc doesn't fit intercept when fitIntercept is off (1 second, 15 milliseconds) [info] - sparse coefficients in HingeAggregator (11 milliseconds) [info] - training with sample weights (16 seconds, 288 milliseconds) [info] - read/write (4 seconds, 818 milliseconds) [info] - SPARK-20043: ImpurityCalculator builder fails for uppercase impurity type Gini in model read/write (1 second, 245 milliseconds) [info] - SPARK-33398: Load DecisionTreeClassificationModel prior to Spark 3.0 (509 milliseconds) [info] PCASuite: [info] - params (10 milliseconds) [info] - pca (2 seconds, 160 milliseconds) [info] - PCA read/write (284 milliseconds) [info] - PCAModel read/write (993 milliseconds) [info] KMeansClusterSuite: [info] - task size should be small in both training and prediction (6 seconds, 591 milliseconds) [info] MultilabelMetricsSuite: [info] - Multilabel evaluation metrics (58 milliseconds) [info] CorrelationSuite: [info] - corr(X) default, pearson (318 milliseconds) [info] - corr(X) spearman (312 milliseconds) [info] StandardScalerSuite: [info] - params (4 milliseconds) [info] - Standardization with default parameter (697 milliseconds) [info] - linearSVC with sample weights (35 seconds, 429 milliseconds) [info] - Standardization with setter (1 second, 421 milliseconds) [info] - sparse data and withMean (408 milliseconds) [info] - StandardScaler read/write (292 milliseconds) [info] - StandardScalerModel read/write (916 milliseconds) [info] MaxAbsScalerSuite: [info] - MaxAbsScaler fit basic case (531 milliseconds) [info] - MaxAbsScaler read/write (295 milliseconds) [info] - MaxAbsScalerModel read/write (910 milliseconds) [info] FPTreeSuite: [info] - add transaction (9 milliseconds) [info] - merge tree (3 milliseconds) [info] - extract freq itemsets (4 milliseconds) [info] PipelineSuite: [info] - pipeline (1 second, 335 milliseconds) [info] - pipeline with duplicate stages (2 milliseconds) [info] - Pipeline.copy (4 milliseconds) [info] - PipelineModel.copy (0 milliseconds) [info] - pipeline model constructors (0 milliseconds) [info] - Pipeline read/write (719 milliseconds) [info] - Pipeline read/write with non-Writable stage (3 milliseconds) [info] - PipelineModel read/write (693 milliseconds) [info] - PipelineModel read/write: getStagePath (1 millisecond) [info] - PipelineModel read/write with non-Writable stage (1 millisecond) [info] - pipeline validateParams (36 milliseconds) [info] - Pipeline.setStages should handle Java Arrays being non-covariant (0 milliseconds) [info] LDASuite: [info] - default parameters (6 milliseconds) [info] - set parameters (1 millisecond) [info] - parameters validation (28 milliseconds) [info] - fit & transform with Online LDA (1 second, 534 milliseconds) [info] - fit & transform with EM LDA (678 milliseconds) [info] - read/write LocalLDAModel (2 seconds, 712 milliseconds) [info] - LogisticRegression on blocks (54 seconds, 
400 milliseconds) [info] - coefficients and intercept methods (449 milliseconds) [info] - sparse coefficients in LogisticAggregator (12 milliseconds) [info] - overflow prediction for multiclass (391 milliseconds) [info] - LinearSVC on blocks (13 seconds, 935 milliseconds) [info] - read/write DistributedLDAModel (3 seconds, 101 milliseconds) [info] - EM LDA checkpointing: save last checkpoint (658 milliseconds) [info] - prediction on single instance (2 seconds, 337 milliseconds) [info] - EM LDA checkpointing: remove last checkpoint (608 milliseconds) [info] - EM LDA disable checkpointing (328 milliseconds) [info] - binary logistic regression with intercept without regularization (3 seconds, 374 milliseconds) [info] - string params should be case-insensitive (2 seconds, 921 milliseconds) [info] - LDA with Array input (948 milliseconds) [info] NumericParserSuite: [info] - parser (3 milliseconds) [info] - parser with whitespaces (0 milliseconds) [info] MatricesSuite: [info] - kryo class register (107 milliseconds) [info] - dense matrix construction (0 milliseconds) [info] - dense matrix construction with wrong dimension (1 millisecond) [info] - sparse matrix construction (30 milliseconds) [info] - sparse matrix construction with wrong number of elements (1 millisecond) [info] - index in matrices incorrect input (4 milliseconds) [info] - equals (7 milliseconds) [info] - matrix copies are deep copies (1 millisecond) [info] - matrix indexing and updating (1 millisecond) [info] - toSparse, toDense (1 millisecond) [info] - map, update (2 milliseconds) [info] - transpose (1 millisecond) [info] - foreachActive (1 millisecond) [info] - horzcat, vertcat, eye, speye (12 milliseconds) [info] - zeros (1 millisecond) [info] - ones (1 millisecond) [info] - eye (1 millisecond) [info] - rand (32 milliseconds) [info] - randn (2 milliseconds) [info] - diag (0 milliseconds) [info] - sprand (5 milliseconds) [info] - sprandn (2 milliseconds) [info] - MatrixUDT (2 milliseconds) [info] - toString (6 milliseconds) [info] - numNonzeros and numActives (1 millisecond) [info] - fromBreeze with sparse matrix (1 millisecond) [info] - Test FromBreeze when Breeze.CSCMatrix.rowIndices has trailing zeros. 
- SPARK-20687 (4 milliseconds) [info] - row/col iterator (4 milliseconds) [info] - conversions between new local linalg and mllib linalg (2 milliseconds) [info] - implicit conversions between new local linalg and mllib linalg (1 millisecond) [info] MultivariateGaussianSuite: [info] - univariate (21 milliseconds) [info] - multivariate (7 milliseconds) [info] - multivariate degenerate (1 millisecond) [info] - SPARK-11302 (2 milliseconds) [info] - Kryo class register (8 milliseconds) [info] StreamingLogisticRegressionSuite: [info] - binary logistic regression with intercept without regularization with bound (9 seconds, 277 milliseconds) [info] - parameter accuracy (8 seconds, 183 milliseconds) [info] - binary logistic regression without intercept without regularization (3 seconds, 458 milliseconds) [info] - linearSVC comparison with R e1071 and scikit-learn (14 seconds, 419 milliseconds) [info] - binary logistic regression without intercept without regularization with bound (1 second, 985 milliseconds) [info] - summary and training summary (2 seconds, 561 milliseconds) [info] - parameter convergence (8 seconds, 125 milliseconds) [info] - predictions (395 milliseconds) [info] - linearSVC training summary totalIterations (4 seconds, 984 milliseconds) [info] - binary logistic regression with intercept with L1 regularization (6 seconds, 337 milliseconds) [info] - training and prediction (2 seconds, 598 milliseconds) [info] - handling empty RDDs in a stream (694 milliseconds) [info] RandomDataGeneratorSuite: [info] - read/write: SVM (3 seconds, 56 milliseconds) [info] - UniformGenerator (75 milliseconds) [info] - StandardNormalGenerator (102 milliseconds) [info] - LogNormalGenerator (481 milliseconds) [info] - binary logistic regression without intercept with L1 regularization (4 seconds, 727 milliseconds) [info] - PoissonGenerator (2 seconds, 340 milliseconds) [info] - ExponentialGenerator (368 milliseconds) [info] - GammaGenerator (521 milliseconds) [info] - binary logistic regression with intercept with L2 regularization (2 seconds, 838 milliseconds) [info] - WeibullGenerator (800 milliseconds) [info] JsonVectorConverterSuite: [info] - toJson/fromJson (4 milliseconds) [info] LogisticRegressionClusterSuite: [info] - binary logistic regression with intercept with L2 regularization with bound (2 seconds, 120 milliseconds) [info] - binary logistic regression without intercept with L2 regularization (1 second, 835 milliseconds) [info] - binary logistic regression without intercept with L2 regularization with bound (1 second, 576 milliseconds) [info] - task size should be small in both training and prediction using SGD optimizer (5 seconds, 918 milliseconds) [info] - task size should be small in both training and prediction using LBFGS optimizer (2 seconds, 857 milliseconds) [info] RegressionEvaluatorSuite: [info] - params (3 milliseconds) [info] - Regression Evaluator: default params (462 milliseconds) [info] - read/write (331 milliseconds) [info] - should support all NumericType labels and not support other types (347 milliseconds) [info] - getMetrics (400 milliseconds) [info] RankingMetricsSuite: [info] - Ranking metrics: MAP, NDCG, Recall (350 milliseconds) [info] - MAP, NDCG, Recall with few predictions (SPARK-14886) (91 milliseconds) [info] StreamingKMeansSuite: [info] - accuracy for single center and equivalence to grand average (886 milliseconds) [info] - accuracy for two centers (814 milliseconds) [info] - detecting dying clusters (807 milliseconds) [info] - SPARK-7946 setDecayFactor (1 
millisecond) [info] MLUtilsSuite: [info] - epsilon computation (1 millisecond) [info] - fast squared distance (26 milliseconds) [info] - loadLibSVMFile (205 milliseconds) [info] - loadLibSVMFile throws IllegalArgumentException when indices is zero-based (105 milliseconds) [info] - loadLibSVMFile throws IllegalArgumentException when indices is not in ascending order (59 milliseconds) [info] - saveAsLibSVMFile (115 milliseconds) [info] - appendBias (1 millisecond) [info] - kFold (12 seconds, 927 milliseconds) [info] - loadVectors (171 milliseconds) [info] - loadLabeledPoints (167 milliseconds) [info] - log1pExp (0 milliseconds) [info] - convertVectorColumnsToML (225 milliseconds) [info] - convertVectorColumnsFromML (180 milliseconds) [info] - convertMatrixColumnsToML (204 milliseconds) [info] - binary logistic regression with intercept with ElasticNet regularization (22 seconds, 756 milliseconds) [info] - convertMatrixColumnsFromML (206 milliseconds) [info] - kFold with fold column (627 milliseconds) [info] - kFold with fold column: invalid fold numbers (976 milliseconds) [info] SQLDataTypesSuite: [info] - sqlDataTypes (2 milliseconds) [info] GradientBoostedTreesSuite: [info] - binary logistic regression without intercept with ElasticNet regularization (8 seconds, 833 milliseconds) [info] - binary logistic regression with intercept with strong L1 regularization (939 milliseconds) [info] - runWithValidation stops early and performs better on a validation dataset (8 seconds, 176 milliseconds) [info] RDDFunctionsSuite: [info] - multinomial logistic regression with intercept with strong L1 regularization (1 second, 297 milliseconds) [info] - sliding (4 seconds, 149 milliseconds) [info] - sliding with empty partitions (37 milliseconds) [info] GradientDescentClusterSuite: [info] - multinomial logistic regression with intercept without regularization (6 seconds, 888 milliseconds) [info] - multinomial logistic regression with zero variance (SPARK-21681) (1 second, 42 milliseconds) [info] - task size should be small (4 seconds, 795 milliseconds) [info] MulticlassClassificationEvaluatorSuite: [info] - params (7 milliseconds) [info] - read/write (347 milliseconds) [info] - should support all NumericType labels and not support other types (431 milliseconds) [info] - evaluation metrics (120 milliseconds) [info] - MulticlassClassificationEvaluator support logloss (120 milliseconds) [info] - getMetrics (267 milliseconds) [info] InstanceSuite: [info] - Kryo class register (19 milliseconds) [info] - InstanceBlock: check correctness (4 milliseconds) [info] - InstanceBlock: blokify with max memory usage (74 milliseconds) [info] DCTSuite: [info] - forward transform of discrete cosine matches jTransforms result (662 milliseconds) [info] - inverse transform of discrete cosine matches jTransforms result (432 milliseconds) [info] - read/write (311 milliseconds) [info] KernelDensitySuite: [info] - kernel density single sample (32 milliseconds) [info] - kernel density multiple samples (12 milliseconds) [info] BreezeVectorConversionSuite: [info] - dense to breeze (0 milliseconds) [info] - sparse to breeze (0 milliseconds) [info] - dense breeze to vector (1 millisecond) [info] - sparse breeze to vector (0 milliseconds) [info] - sparse breeze with partially-used arrays to vector (0 milliseconds) [info] LibSVMRelationSuite: [info] - Propagate Hadoop configs from libsvm options to underlying file system (985 milliseconds) [info] - select as sparse vector (161 milliseconds) [info] - select as dense vector (231 
milliseconds) [info] - illegal vector types (7 milliseconds) [info] - select a vector with specifying the longer dimension (76 milliseconds) [info] - case insensitive option (60 milliseconds) [info] - write libsvm data and read it again (413 milliseconds) [info] - write libsvm data failed due to invalid schema (24 milliseconds) [info] - write libsvm data from scratch and read it again (273 milliseconds) [info] - select features from libsvm relation (218 milliseconds) [info] - create libsvmTable table without schema (227 milliseconds) [info] - create libsvmTable table without schema and path (23 milliseconds) [info] - SPARK-32815: Test LibSVM data source on file paths with glob metacharacters (308 milliseconds) [info] IsotonicRegressionSuite: [info] - isotonic regression predictions (499 milliseconds) [info] - antitonic regression predictions (444 milliseconds) [info] - params validation (106 milliseconds) [info] - default params (153 milliseconds) [info] - set parameters (0 milliseconds) [info] - missing column (111 milliseconds) [info] - vector features column with feature index (427 milliseconds) [info] - read/write (1 second, 301 milliseconds) [info] - should support all NumericType labels and weights, and not support other types (994 milliseconds) [info] BinaryClassificationEvaluatorSuite: [info] - params (2 milliseconds) [info] - read/write (297 milliseconds) [info] - should accept both vector and double raw prediction col (381 milliseconds) [info] - should accept weight column (415 milliseconds) [info] - should support all NumericType labels and not support other types (969 milliseconds) [info] - getMetrics (515 milliseconds) [info] AttributeGroupSuite: [info] - attribute group (14 milliseconds) [info] - attribute group without attributes (0 milliseconds) [info] CorrelationSuite: [info] - corr(x, y) pearson, 1 value in data (173 milliseconds) [info] - corr(x, y) default, pearson (420 milliseconds) [info] - corr(x, y) spearman (666 milliseconds) [info] - corr(X) default, pearson (127 milliseconds) [info] - corr(X) spearman (169 milliseconds) [info] - method identification (1 millisecond) [info] - Pearson correlation of very large uncorrelated values (SPARK-14533) !!! IGNORED !!! 
[info] MultilabelClassificationEvaluatorSuite: [info] - params (3 milliseconds) [info] - evaluation metrics (180 milliseconds) [info] - read/write (309 milliseconds) [info] - getMetrics (465 milliseconds) [info] MultiClassSummarizerSuite: [info] - MultiClassSummarizer (3 milliseconds) [info] - MultiClassSummarizer with weighted samples (1 millisecond) [info] StopwatchSuite: [info] - LocalStopwatch (21 milliseconds) [info] - DistributedStopwatch on driver (12 milliseconds) [info] - DistributedStopwatch on executors (47 milliseconds) [info] - MultiStopwatch (45 milliseconds) [info] SVMClusterSuite: [info] - task size should be small in both training and prediction (5 seconds, 885 milliseconds) [info] RowMatrixClusterSuite: [info] - multinomial logistic regression with intercept without regularization with bound (24 seconds, 955 milliseconds) [info] - task size should be small in svd (6 seconds, 685 milliseconds) [info] - multinomial logistic regression without intercept without regularization (4 seconds, 656 milliseconds) [info] - task size should be small in summarize (342 milliseconds) [info] BucketizerSuite: [info] - params (11 milliseconds) [info] - Bucket continuous features, without -inf,inf (503 milliseconds) [info] - Bucket continuous features, with -inf,inf (337 milliseconds) [info] - Bucket continuous features, with NaN data but non-NaN splits (432 milliseconds) [info] - Bucketizer should only drop NaN in input columns, with handleInvalid=skip (89 milliseconds) [info] - Bucket continuous features, with NaN splits (2 milliseconds) [info] - Binary search correctness on hand-picked examples (3 milliseconds) [info] - Binary search correctness in contrast with linear search, on random data (3 milliseconds) [info] - read/write (362 milliseconds) [info] - Bucket numeric features (235 milliseconds) [info] - multiple columns: Bucket continuous features, without -inf,inf (127 milliseconds) [info] - multiple columns: Bucket continuous features, with -inf,inf (45 milliseconds) [info] - multiple columns: Bucket continuous features, with NaN data but non-NaN splits (126 milliseconds) [info] - multiple columns: Bucket continuous features, with NaN splits (2 milliseconds) [info] - multiple columns: read/write (357 milliseconds) [info] - Bucketizer in a pipeline (63 milliseconds) [info] - Compare single/multiple column(s) Bucketizer in pipeline (76 milliseconds) [info] - assert exception is thrown if both multi-column and single-column params are set (46 milliseconds) [info] SVMSuite: [info] - SVM with threshold (805 milliseconds) [info] - SVM using local random SGD (715 milliseconds) [info] - multinomial logistic regression without intercept without regularization with bound (4 seconds, 824 milliseconds) [info] - SVM local random SGD with initial weights (603 milliseconds) [info] - SVM with invalid labels (4 seconds, 589 milliseconds) [info] - model save/load (963 milliseconds) [info] BisectingKMeansSuite: [info] - default parameters (735 milliseconds) [info] - SPARK-16473: Verify Bisecting K-Means does not fail in edge case whereone cluster is empty after split (1 second, 327 milliseconds) [info] - setter/getter (2 milliseconds) [info] - fit, transform and summary (2 seconds, 905 milliseconds) [info] - read/write (2 seconds, 1 milliseconds) [info] - BisectingKMeans with cosine distance is not supported for 0-length vectors (134 milliseconds) [info] - BisectingKMeans with cosine distance (2 seconds, 494 milliseconds) [info] - Comparing with and without weightCol with cosine distance (4 seconds, 
385 milliseconds) [info] - Comparing with and without weightCol (4 seconds, 143 milliseconds) [info] - BisectingKMeans with Array input (1 second, 660 milliseconds) [info] - prediction on single instance (1 second, 571 milliseconds) [info] LeastSquaresAggregatorSuite: [info] - aggregator add method input size (31 milliseconds) [info] - negative weight (23 milliseconds) [info] - check sizes (48 milliseconds) [info] - check correctness (187 milliseconds) [info] - check with zero standard deviation (41 milliseconds) [info] ParamGridBuilderSuite: [info] - param grid builder (5 milliseconds) [info] MLTestSuite: [info] - test transformer on stream data (1 second, 475 milliseconds) [info] RegressionMetricsSuite: [info] - regression metrics for unbiased (includes intercept term) predictor (25 milliseconds) [info] - regression metrics for biased (no intercept term) predictor (17 milliseconds) [info] - regression metrics with complete fitting (16 milliseconds) [info] - regression metrics with same (1.0) weight samples (17 milliseconds) [info] - regression metrics with weighted samples (15 milliseconds) [info] PowerIterationClusteringSuite: [info] - multinomial logistic regression with intercept with L1 regularization (35 seconds, 270 milliseconds) [info] - power iteration clustering (15 seconds, 601 milliseconds) [info] - multinomial logistic regression without intercept with L1 regularization (17 seconds, 147 milliseconds) [info] - power iteration clustering on graph (14 seconds, 79 milliseconds) [info] - normalize and powerIter (320 milliseconds) [info] - model save/load (614 milliseconds) [info] RandomForestSuite: [info] - multinomial logistic regression with intercept with L2 regularization (7 seconds, 998 milliseconds) [info] - Binary classification with continuous features: split calculation (108 milliseconds) [info] - Binary classification with binary (ordered) categorical features: split calculation (41 milliseconds) [info] - Binary classification with 3-ary (ordered) categorical features, with no samples for one category: split calculation (47 milliseconds) [info] - find splits for a continuous feature (44 milliseconds) [info] - train with empty arrays (24 milliseconds) [info] - train with constant features (290 milliseconds) [info] - Multiclass classification with unordered categorical features: split calculations (41 milliseconds) [info] - Multiclass classification with ordered categorical features: split calculations (55 milliseconds) [info] - extract categories from a number for multiclass classification (0 milliseconds) [info] - Avoid aggregation on the last level (139 milliseconds) [info] - Avoid aggregation if impurity is 0.0 (129 milliseconds) [info] - Use soft prediction for binary classification with ordered categorical features (95 milliseconds) [info] - Second level node building with vs. 
without groups (522 milliseconds) [info] - Binary classification with continuous features: subsampling features (1 second, 638 milliseconds) [info] - Binary classification with continuous features and node Id cache: subsampling features (1 second, 569 milliseconds) [info] - computeFeatureImportance, featureImportances (7 milliseconds) [info] - normalizeMapValues (1 millisecond) [info] - SPARK-3159 tree model redundancy - classification (357 milliseconds) [info] - multinomial logistic regression with intercept with L2 regularization with bound (5 seconds, 309 milliseconds) [info] - SPARK-3159 tree model redundancy - regression (355 milliseconds) [info] - weights at arbitrary scale (464 milliseconds) [info] - minWeightFraction and minInstancesPerNode (395 milliseconds) [info] PrefixSpanSuite: [info] - PrefixSpan internal (integer seq, 0 delim) run, singleton itemsets (286 milliseconds) [info] - PrefixSpan internal (integer seq, -1 delim) run, variable-size itemsets (73 milliseconds) [info] - PrefixSpan projections with multiple partial starts (191 milliseconds) [info] - PrefixSpan Integer type, variable-size itemsets (165 milliseconds) [info] - PrefixSpan String type, variable-size itemsets (168 milliseconds) [info] - PrefixSpan pre-processing's cleaning test (46 milliseconds) [info] - model save/load (795 milliseconds) [info] ParamsSuite: [info] - json encode/decode (18 milliseconds) [info] - param (3 milliseconds) [info] - param pair (1 millisecond) [info] - param map (2 milliseconds) [info] - params (3 milliseconds) [info] - ParamValidate (9 milliseconds) [info] - Params.copyValues (0 milliseconds) [info] - Filtering ParamMap (4 milliseconds) [info] AFTSurvivalRegressionSuite: [info] - export test data into CSV format !!! IGNORED !!! [info] - params (8 milliseconds) [info] - aft survival regression: default params (951 milliseconds) [info] - multinomial logistic regression without intercept with L2 regularization (4 seconds, 394 milliseconds) [info] - aft survival regression with univariate (1 second, 238 milliseconds) [info] - aft survival regression with multivariate (933 milliseconds) [info] - multinomial logistic regression without intercept with L2 regularization with bound (2 seconds, 373 milliseconds) [info] - aft survival regression w/o intercept (806 milliseconds) [info] - aft survival regression w/o quantiles column (799 milliseconds) [info] - should support all NumericType labels, and not support other types (1 second, 59 milliseconds) [info] - should support all NumericType censors, and not support other types (931 milliseconds) [info] - numerical stability of standardization (1 second, 504 milliseconds) [info] - read/write (1 second, 364 milliseconds) [info] - SPARK-15892: Incorrectly merged AFTAggregator with zero total count (2 seconds, 542 milliseconds) [info] - AFTSurvivalRegression on blocks (14 seconds, 493 milliseconds) [info] CrossValidatorSuite: [info] - cross validation with logistic regression (8 seconds, 5 milliseconds) [info] - multinomial logistic regression with intercept with elasticnet regularization (38 seconds, 751 milliseconds) [info] - cross validation with logistic regression with fold col (10 seconds, 22 milliseconds) [info] - cross validation with logistic regression with wrong fold col (6 milliseconds) [info] - cross validation with linear regression (9 seconds, 456 milliseconds) [info] - transformSchema should check estimatorParamMaps (3 milliseconds) [info] - multinomial logistic regression without intercept with elasticnet regularization (13 
seconds, 223 milliseconds) [info] - evaluate on test set (1 second, 909 milliseconds) [info] - evaluate with labels that are not doubles (1 second, 158 milliseconds) [info] - statistics on training data (811 milliseconds) [info] - cross validation with parallel evaluation (6 seconds, 406 milliseconds) [info] - read/write: CrossValidator with simple estimator (984 milliseconds) [info] - logistic regression with sample weights (15 seconds, 458 milliseconds) [info] - CrossValidator expose sub models (15 seconds, 353 milliseconds) [info] - set family (2 seconds, 129 milliseconds) [info] - read/write: CrossValidator with nested estimator (2 seconds, 194 milliseconds) [info] - read/write: Persistence of nested estimator works if parent directory changes (1 second, 165 milliseconds) [info] - set initial model (3 seconds, 763 milliseconds) [info] - binary logistic regression with all labels the same (886 milliseconds) [info] - read/write: CrossValidator with complex estimator (2 seconds, 501 milliseconds) [info] - read/write: CrossValidator fails for extraneous Param (3 milliseconds) [info] - multiclass logistic regression with all labels the same (1 second, 594 milliseconds) [info] - compressed storage for constant label (382 milliseconds) [info] - read/write: CrossValidatorModel (2 seconds, 30 milliseconds) [info] FunctionsSuite: [info] - test vector_to_array (223 milliseconds) [info] - test array_to_vector (121 milliseconds) [info] ChiSquareTestSuite: [info] - test DataFrame of labeled points (1 second, 224 milliseconds) [info] - compressed coefficients (3 seconds, 212 milliseconds) [info] - numClasses specified in metadata/inferred (589 milliseconds) [info] - test DataFrame of sparse points (2 seconds, 252 milliseconds) [info] - large number of features (SPARK-3087) (445 milliseconds) [info] - read/write (4 seconds, 711 milliseconds) [info] - should support all NumericType labels and weights, and not support other types (2 seconds, 36 milliseconds) [info] - fail on continuous features or labels (5 seconds, 888 milliseconds) [info] LogisticRegressionSuite: [info] - logistic regression with SGD (789 milliseconds) [info] - string params should be case-insensitive (2 seconds, 155 milliseconds) [info] - toString (1 millisecond) [info] - logistic regression with LBFGS (1 second, 675 milliseconds) [info] - logistic regression with initial weights with SGD (725 milliseconds) [info] - logistic regression with initial weights and non-default regularization parameter (456 milliseconds) [info] - logistic regression with initial weights with LBFGS (696 milliseconds) [info] - numerical stability of scaling features using logistic regression with LBFGS (3 seconds, 692 milliseconds) [info] - multinomial logistic regression with LBFGS (16 seconds, 943 milliseconds) [info] - model save/load: binary classification (882 milliseconds) [info] - model save/load: multiclass classification (422 milliseconds) [info] - binary logistic regression with intercept without regularization (3 seconds, 271 milliseconds) [info] - binary logistic regression without intercept without regularization (3 seconds, 586 milliseconds) [info] - binary logistic regression with intercept with L1 regularization (7 seconds, 742 milliseconds) [info] - binary logistic regression without intercept with L1 regularization (5 seconds, 336 milliseconds) [info] - binary logistic regression with intercept with L2 regularization (2 seconds, 924 milliseconds) [info] - binary logistic regression without intercept with L2 regularization (1 second, 687 
milliseconds) [info] OneHotEncoderSuite: [info] - params (3 milliseconds) [info] - OneHotEncoder dropLast = false (593 milliseconds) [info] - Single Column: OneHotEncoder dropLast = false (430 milliseconds) [info] - OneHotEncoder dropLast = true (509 milliseconds) [info] - input column with ML attribute (394 milliseconds) [info] - Single Column: input column with ML attribute (321 milliseconds) [info] - input column without ML attribute (355 milliseconds) [info] - read/write (270 milliseconds) [info] - Single Column: read/write (292 milliseconds) [info] - OneHotEncoderModel read/write (918 milliseconds) [info] - OneHotEncoder with varying types (3 seconds, 688 milliseconds) [info] - Single Column: OneHotEncoder with varying types (3 seconds, 28 milliseconds) [info] - OneHotEncoder: encoding multiple columns and dropLast = false (578 milliseconds) [info] - Single Column: OneHotEncoder: encoding multiple columns and dropLast = false (945 milliseconds) [info] - OneHotEncoder: encoding multiple columns and dropLast = true (480 milliseconds) [info] - Throw error on invalid values (355 milliseconds) [info] - Can't transform on negative input (359 milliseconds) [info] - Keep on invalid values: dropLast = false (429 milliseconds) [info] - Keep on invalid values: dropLast = true (412 milliseconds) [info] - OneHotEncoderModel changes dropLast (929 milliseconds) [info] - OneHotEncoderModel changes handleInvalid (731 milliseconds) [info] - Transforming on mismatched attributes (41 milliseconds) [info] - assert exception is thrown if both multi-column and single-column params are set (15 milliseconds) [info] - Compare single/multiple column(s) OneHotEncoder in pipeline (227 milliseconds) [info] RobustScalerSuite: [info] - params (7 milliseconds) [info] - Scaling with default parameter (487 milliseconds) [info] - Scaling with setter (1 second, 146 milliseconds) [info] - sparse data and withCentering (399 milliseconds) [info] - deal with NaN values (381 milliseconds) [info] - deal with high-dim dataset (616 milliseconds) [info] - RobustScaler read/write (290 milliseconds) [info] - RobustScalerModel read/write (926 milliseconds) [info] VectorIndexerSuite: [info] - params (6 milliseconds) [info] - Cannot fit an empty DataFrame (20 milliseconds) [info] - Throws error when given RDDs with different size vectors (1 second, 421 milliseconds) [info] - Same result with dense and sparse vectors (245 milliseconds) [info] - Builds valid categorical feature value index, transform correctly, check metadata (1 second, 479 milliseconds) [info] - handle invalid (2 seconds, 811 milliseconds) [info] - Maintain sparsity for sparse vectors (723 milliseconds) [info] - Preserve metadata (364 milliseconds) [info] - VectorIndexer read/write (289 milliseconds) [info] - VectorIndexerModel read/write (941 milliseconds) [info] BucketedRandomProjectionLSHSuite: [info] - params (7 milliseconds) [info] - setters (0 milliseconds) [info] - BucketedRandomProjectionLSH: default params (1 millisecond) [info] - read/write (1 second, 197 milliseconds) [info] - hashFunction (2 milliseconds) [info] - keyDistance (0 milliseconds) [info] - BucketedRandomProjectionLSH: randUnitVectors (20 milliseconds) [info] - BucketedRandomProjectionLSH: streaming transform (415 milliseconds) [info] - BucketedRandomProjectionLSH: test of LSH property (2 seconds, 108 milliseconds) [info] - BucketedRandomProjectionLSH with high dimension data: test of LSH property (6 seconds, 943 milliseconds) [info] - approxNearestNeighbors for bucketed random projection (630 
milliseconds) [info] - approxNearestNeighbors with multiple probing (919 milliseconds) [info] - approxNearestNeighbors for numNeighbors <= 0 (2 milliseconds) [info] - approxSimilarityJoin for bucketed random projection on different dataset (1 second, 416 milliseconds) [info] - approxSimilarityJoin for self join (1 second, 225 milliseconds) [info] HashingTFSuite: [info] - hashing tf on a single doc (10 milliseconds) [info] - hashing tf on an RDD (24 milliseconds) [info] - applying binary term freqs (1 millisecond) [info] LinearRegressionClusterSuite: [info] - task size should be small in both training and prediction (5 seconds, 658 milliseconds) [info] BreezeMatrixConversionSuite: [info] - dense matrix to breeze (1 millisecond) [info] - dense breeze matrix to matrix (0 milliseconds) [info] - sparse matrix to breeze (1 millisecond) [info] - sparse breeze matrix to sparse matrix (0 milliseconds) [info] GBTClassifierSuite: [info] - params (12 milliseconds) [info] - GBTClassifier: default params (4 seconds, 360 milliseconds) [info] - setThreshold, getThreshold (2 milliseconds) [info] - thresholds prediction (10 seconds, 577 milliseconds) [info] - GBTClassifier: Predictor, Classifier methods (8 seconds, 780 milliseconds) [info] - prediction on single instance (4 seconds, 154 milliseconds) [info] - GBT parameter stepSize should be in interval (0, 1] (2 milliseconds) [info] - Binary classification with continuous features: Log Loss (8 seconds, 955 milliseconds) [info] - Checkpointing (826 milliseconds) [info] - model support predict leaf index (92 milliseconds) [info] - should support all NumericType labels and not support other types (9 seconds, 85 milliseconds) [info] - Fitting without numClasses in metadata (228 milliseconds) [info] - extractLabeledPoints with bad data (284 milliseconds) [info] - Feature importance with toy data (466 milliseconds) [info] - Tests of feature subset strategy (1 second, 11 milliseconds) [info] - model evaluateEachIteration (798 milliseconds) [info] - runWithValidation stops early and performs better on a validation dataset (3 seconds, 347 milliseconds) [info] - tree params (2 seconds, 683 milliseconds) [info] - training with sample weights (36 seconds, 507 milliseconds) [info] - model save/load (4 seconds, 796 milliseconds) [info] - SPARK-33398: Load GBTClassificationModel prior to Spark 3.0 (548 milliseconds) [info] FValueTestSuite: [info] - test DataFrame of labeled points (1 second, 709 milliseconds) [info] - test DataFrame with sparse vector (301 milliseconds) [info] RankingEvaluatorSuite: [info] - params (3 milliseconds) [info] - read/write (289 milliseconds) [info] - evaluation metrics (129 milliseconds) [info] - getMetrics (305 milliseconds) [info] NaiveBayesClusterSuite: [info] - task size should be small in both training and prediction (6 seconds, 801 milliseconds) [info] CoordinateMatrixSuite: [info] - size (29 milliseconds) [info] - empty entries (21 milliseconds) [info] - toBreeze (14 milliseconds) [info] - transpose (32 milliseconds) [info] - toIndexedRowMatrix (49 milliseconds) [info] - toRowMatrix (47 milliseconds) [info] - toBlockMatrix (80 milliseconds) [info] TrainValidationSplitSuite: [info] - train validation with logistic regression (3 seconds, 597 milliseconds) [info] - train validation with linear regression (4 seconds, 12 milliseconds) [info] - transformSchema should check estimatorParamMaps (3 milliseconds) [info] - train validation with parallel evaluation (3 seconds, 743 milliseconds) [info] - read/write: TrainValidationSplit (1 second, 5 
milliseconds) [info] - TrainValidationSplit expose sub models (6 seconds, 686 milliseconds) [info] - read/write: TrainValidationSplit with nested estimator (2 seconds, 213 milliseconds) [info] - read/write: Persistence of nested estimator works if parent directory changes (1 second, 63 milliseconds) [info] - read/write: TrainValidationSplitModel (2 seconds, 45 milliseconds) [info] IsotonicRegressionSuite: [info] - increasing isotonic regression (59 milliseconds) [info] - model save/load (436 milliseconds) [info] - isotonic regression with size 0 (42 milliseconds) [info] - isotonic regression with size 1 (45 milliseconds) [info] - isotonic regression strictly increasing sequence (46 milliseconds) [info] - isotonic regression strictly decreasing sequence (44 milliseconds) [info] - isotonic regression with last element violating monotonicity (47 milliseconds) [info] - isotonic regression with first element violating monotonicity (46 milliseconds) [info] - isotonic regression with negative labels (48 milliseconds) [info] - isotonic regression with unordered input (47 milliseconds) [info] - weighted isotonic regression (44 milliseconds) [info] - weighted isotonic regression with weights lower than 1 (49 milliseconds) [info] - weighted isotonic regression with negative weights (51 milliseconds) [info] - weighted isotonic regression with zero weights (45 milliseconds) [info] - SPARK-16426 isotonic regression with duplicate features that produce NaNs (44 milliseconds) [info] - isotonic regression prediction (47 milliseconds) [info] - isotonic regression prediction with duplicate features (46 milliseconds) [info] - antitonic regression prediction with duplicate features (47 milliseconds) [info] - isotonic regression RDD prediction (85 milliseconds) [info] - antitonic regression prediction (51 milliseconds) [info] - model construction (3 milliseconds) [info] SharedParamsSuite: [info] - outputCol (3 milliseconds) [info] ALSSuite: [info] - LocalIndexEncoder (4 milliseconds) [info] - normal equation construction (4 milliseconds) [info] - CholeskySolver (9 milliseconds) [info] - RatingBlockBuilder (4 milliseconds) [info] - UncompressedInBlock (14 milliseconds) [info] - CheckedCast (548 milliseconds) [info] - exact rank-1 matrix (5 seconds, 116 milliseconds) [info] - approximate rank-1 matrix (4 seconds, 830 milliseconds) [info] - approximate rank-2 matrix (4 seconds, 988 milliseconds) [info] - different block settings (8 seconds, 449 milliseconds) [info] - more blocks than ratings (2 seconds, 198 milliseconds) [info] - implicit feedback (2 seconds, 770 milliseconds) [info] - implicit feedback regression (3 seconds, 878 milliseconds) [info] - using generic ID types (1 second, 981 milliseconds) [info] - nonnegative constraint (929 milliseconds) [info] - als partitioner is a projection (5 milliseconds) [info] - partitioner in returned factors (574 milliseconds) [info] - als with large number of iterations (8 seconds, 891 milliseconds) [info] - read/write (2 seconds, 94 milliseconds) [info] - input type validation (45 seconds, 552 milliseconds) [info] - SPARK-18268: ALS with empty RDD should fail with better message (25 milliseconds) [info] - ALS cold start user/item prediction strategy (2 seconds, 997 milliseconds) [info] - case insensitive cold start param value (7 seconds, 471 milliseconds) [info] - recommendForAllUsers with k <, = and > num_items (3 seconds, 22 milliseconds) [info] - recommendForAllItems with k <, = and > num_users (2 seconds, 710 milliseconds) [info] - recommendForUserSubset with k <, 
= and > num_items (3 seconds, 77 milliseconds) [info] - recommendForItemSubset with k <, = and > num_users (2 seconds, 716 milliseconds) [info] - subset recommendations eliminate duplicate ids, returns same results as unique ids (2 seconds, 404 milliseconds) [info] - subset recommendations on full input dataset equivalent to recommendForAll (1 second, 572 milliseconds) [info] - ALS should not introduce unnecessary shuffle (720 milliseconds) [info] DistanceMeasureSuite: [info] - predict with statistics (13 milliseconds) [info] - compute statistics distributedly (75 milliseconds) [info] Word2VecSuite: [info] - Word2Vec (185 milliseconds) [info] - Word2Vec throws exception when vocabulary is empty (31 milliseconds) [info] - Word2VecModel (1 millisecond) [info] - findSynonyms doesn't reject similar word vectors when called with a vector (0 milliseconds) [info] - model load / save (506 milliseconds) [info] - big model load / save (419 milliseconds) [info] - test similarity for word vectors with large values is not Infinity or NaN (2 milliseconds) [info] MulticlassMetricsSuite: [info] - Multiclass evaluation metrics (80 milliseconds) [info] - Multiclass evaluation metrics with weights (23 milliseconds) [info] - MulticlassMetrics supports binary class log-loss (61 milliseconds) [info] - MulticlassMetrics supports multi-class log-loss (57 milliseconds) [info] - MulticlassMetrics supports hammingLoss (53 milliseconds) [info] GaussianMixtureSuite: [info] - gmm fails on high dimensional data (49 milliseconds) [info] - default parameters (701 milliseconds) [info] - set parameters (0 milliseconds) [info] - parameters validation (1 millisecond) [info] - fit, transform and summary (966 milliseconds) [info] - read/write (1 second, 672 milliseconds) [info] - univariate dense/sparse data with two clusters (1 second, 344 milliseconds) [info] - multivariate data and check against R mvnormalmixEM (503 milliseconds) [info] - upper triangular matrix unpacking (0 milliseconds) [info] - GaussianMixture with Array input (1 second, 539 milliseconds) [info] - GMM support instance weighting (6 seconds, 693 milliseconds) [info] - prediction on single instance (608 milliseconds) [info] MultilayerPerceptronClassifierSuite: [info] - Input Validation (7 milliseconds) [info] - XOR function learning as binary classification problem with two outputs. 
(1 second, 458 milliseconds) [info] - prediction on single instance (1 second, 18 milliseconds) [info] - Predicted class probabilities: calibration on toy dataset (4 seconds, 777 milliseconds) [info] - test model probability (1 second, 337 milliseconds) [info] - Test setWeights by training restart (505 milliseconds) [info] - 3 class classification with 2 hidden layers (6 seconds, 141 milliseconds) [info] - read/write: MultilayerPerceptronClassifier (277 milliseconds) [info] - read/write: MultilayerPerceptronClassificationModel (1 second, 174 milliseconds) [info] - should support all NumericType labels and not support other types (1 second, 880 milliseconds) [info] - Load MultilayerPerceptronClassificationModel prior to Spark 3.0 (326 milliseconds) [info] - summary and training summary (355 milliseconds) [info] - MultilayerPerceptron training summary totalIterations (2 seconds, 666 milliseconds) [info] VarianceThresholdSelectorSuite: [info] - params (2 milliseconds) [info] - Test VarianceThresholdSelector: varianceThreshold not set (506 milliseconds) [info] - Test VarianceThresholdSelector: set varianceThreshold (382 milliseconds) [info] - Test VarianceThresholdSelector: sparse vector (397 milliseconds) [info] - read/write (1 second, 257 milliseconds) [info] JsonMatrixConverterSuite: [info] - toJson/fromJson (19 milliseconds) [info] DefaultReadWriteSuite: [info] - default read/write (303 milliseconds) [info] - default param shouldn't become user-supplied param after persistence (285 milliseconds) [info] - User-supplied value for default param should be kept after persistence (282 milliseconds) [info] - Read metadata without default field prior to 2.4 (1 millisecond) [info] - Should raise error when read metadata without default field after Spark 2.4 (3 milliseconds) [info] LassoClusterSuite: [info] - task size should be small in both training and prediction (5 seconds, 559 milliseconds) [info] AttributeSuite: [info] - default numeric attribute (1 millisecond) [info] - customized numeric attribute (2 milliseconds) [info] - bad numeric attributes (3 milliseconds) [info] - default nominal attribute (1 millisecond) [info] - customized nominal attribute (2 milliseconds) [info] - bad nominal attributes (2 milliseconds) [info] - default binary attribute (1 millisecond) [info] - customized binary attribute (0 milliseconds) [info] - bad binary attributes (1 millisecond) [info] - attribute from struct field (1 millisecond) [info] - Kryo class register (7 milliseconds) [info] StreamingTestSuite: [info] - accuracy for null hypothesis using welch t-test (388 milliseconds) [info] - accuracy for alternative hypothesis using welch t-test (350 milliseconds) [info] - accuracy for null hypothesis using student t-test (351 milliseconds) [info] - accuracy for alternative hypothesis using student t-test (344 milliseconds) [info] - batches within same test window are grouped (397 milliseconds) [info] - entries in peace period are dropped (337 milliseconds) [info] - null hypothesis when only data from one group is present (347 milliseconds) [info] ANOVATestSuite: [info] - test DataFrame of labeled points (1 second, 21 milliseconds) [info] - test DataFrame with sparse vector (699 milliseconds) [info] GeneralizedLinearPMMLModelExportSuite: [info] - linear regression PMML export (33 milliseconds) [info] - ridge regression PMML export (0 milliseconds) [info] - lasso PMML export (0 milliseconds) [info] KolmogorovSmirnovTestSuite: [info] - 1 sample Kolmogorov-Smirnov test: apache commons math3 implementation equivalence 
(2 seconds, 312 milliseconds) [info] - 1 sample Kolmogorov-Smirnov test: R implementation equivalence (183 milliseconds) [info] LinearRegressionSuite: [info] - export test data into CSV format !!! IGNORED !!! [info] - params (1 millisecond) [info] - linear regression: default params (518 milliseconds) [info] - linear regression: can transform data with LinearRegressionModel (445 milliseconds) [info] - linear regression: illegal params (3 milliseconds) [info] - linear regression handles singular matrices (1 second, 161 milliseconds) [info] - linear regression with intercept without regularization (3 seconds, 742 milliseconds) [info] - linear regression without intercept without regularization (4 seconds, 844 milliseconds) [info] - linear regression with intercept with L1 regularization (4 seconds, 60 milliseconds) [info] - linear regression without intercept with L1 regularization (4 seconds, 249 milliseconds) [info] - linear regression with intercept with L2 regularization (3 seconds, 988 milliseconds) [info] - linear regression without intercept with L2 regularization (4 seconds, 499 milliseconds) [info] - linear regression with intercept with ElasticNet regularization (4 seconds, 16 milliseconds) [info] - linear regression without intercept with ElasticNet regularization (4 seconds, 383 milliseconds) [info] - prediction on single instance (388 milliseconds) [info] - LinearRegression on blocks (1 minute, 20 seconds) [info] - linear regression model with constant label (2 seconds, 819 milliseconds) [info] - regularized linear regression through origin with constant label (818 milliseconds) [info] - linear regression with l-bfgs when training is not needed (2 seconds, 242 milliseconds) [info] - linear regression model training summary (2 seconds, 946 milliseconds) [info] - linear regression model testset evaluation summary (1 second, 838 milliseconds) [info] - linear regression model training summary with weighted samples (8 seconds, 843 milliseconds) [info] - linear regression model testset evaluation summary with weighted samples (11 seconds, 160 milliseconds) [info] - linear regression training summary totalIterations (2 seconds, 429 milliseconds) [info] - linear regression with weighted samples (31 seconds, 775 milliseconds) [info] - linear regression model with l-bfgs with big feature datasets (1 second, 131 milliseconds) [info] - linear regression summary with weighted samples and intercept by normal solver (663 milliseconds) [info] - linear regression summary with weighted samples and w/o intercept by normal solver (399 milliseconds) [info] - read/write (1 second, 515 milliseconds) [info] - pmml export (1 second, 451 milliseconds) [info] - should support all NumericType labels and weights, and not support other types (3 seconds, 120 milliseconds) [info] - linear regression (huber loss) with intercept without regularization (1 second, 704 milliseconds) [info] - linear regression (huber loss) without intercept without regularization (1 second, 69 milliseconds) [info] - linear regression (huber loss) with intercept with L2 regularization (1 second, 141 milliseconds) [info] - linear regression (huber loss) without intercept with L2 regularization (951 milliseconds) [info] - huber loss model match squared error for large epsilon (1 second, 44 milliseconds) [info] ALSStorageSuite: [info] - invalid storage params (7 milliseconds) [info] - default and non-default storage params set correct RDD StorageLevels (1 second, 165 milliseconds) [info] PredictorSuite: [info] - should support all 
NumericType labels and weights, and not support other types (76 milliseconds) [info] HypothesisTestSuite: [info] - chi squared pearson goodness of fit (9 milliseconds) [info] - chi squared pearson matrix independence (3 milliseconds) [info] - chi squared pearson RDD[LabeledPoint] (2 seconds, 57 milliseconds) [info] - 1 sample Kolmogorov-Smirnov test: apache commons math3 implementation equivalence (2 seconds, 599 milliseconds) [info] - 1 sample Kolmogorov-Smirnov test: R implementation equivalence (58 milliseconds) [info] RWrapperUtilsSuite: [info] - avoid libsvm data column name conflicting (236 milliseconds) [info] ProbabilisticClassifierSuite: [info] - test thresholding (2 milliseconds) [info] - test thresholding not required (0 milliseconds) [info] - test tiebreak (1 millisecond) [info] - test one zero threshold (0 milliseconds) [info] - bad thresholds (2 milliseconds) [info] - normalizeToProbabilitiesInPlace (1 millisecond) [info] VectorSizeHintSuite: [info] - Test Param Validators (3 milliseconds) [info] - Required params must be set before transform. (64 milliseconds) [info] - Adding size to column of vectors. (994 milliseconds) [info] - Size hint preserves attributes. (1 second, 52 milliseconds) [info] - Size mismatch between current and target size raises an error. (82 milliseconds) [info] - Handle invalid does the right thing. (1 second, 658 milliseconds) [info] - read/write (304 milliseconds) [info] VectorsSuite: [info] - kryo class register (6 milliseconds) [info] - dense vector construction with varargs (0 milliseconds) [info] - dense vector construction from a double array (0 milliseconds) [info] - sparse vector construction (0 milliseconds) [info] - sparse vector construction with unordered elements (0 milliseconds) [info] - sparse vector construction with mismatched indices/values array (1 millisecond) [info] - sparse vector construction with too many indices vs size (1 millisecond) [info] - dense to array (0 milliseconds) [info] - dense argmax (0 milliseconds) [info] - sparse to array (0 milliseconds) [info] - sparse argmax (0 milliseconds) [info] - vector equals (2 milliseconds) [info] - vectors equals with explicit 0 (2 milliseconds) [info] - indexing dense vectors (0 milliseconds) [info] - indexing sparse vectors (1 millisecond) [info] - parse vectors (2 milliseconds) [info] - zeros (0 milliseconds) [info] - Vector.copy (0 milliseconds) [info] - VectorUDT (1 millisecond) [info] - fromBreeze (1 millisecond) [info] - sqdist (9 milliseconds) [info] - foreach (7 milliseconds) [info] - foreachActive (2 milliseconds) [info] - foreachNonZero (1 millisecond) [info] - vector p-norm (4 milliseconds) [info] - Vector numActive and numNonzeros (1 millisecond) [info] - Vector toSparse and toDense (1 millisecond) [info] - Vector.compressed (1 millisecond) [info] - SparseVector.slice (1 millisecond) [info] - toJson/fromJson (5 milliseconds) [info] - conversions between new local linalg and mllib linalg (0 milliseconds) [info] - implicit conversions between new local linalg and mllib linalg (0 milliseconds) [info] - sparse vector only support non-negative length (2 milliseconds) [info] - dot product only supports vectors of same size (0 milliseconds) [info] - dense vector dot product (0 milliseconds) [info] - sparse vector dot product (0 milliseconds) [info] - mixed sparse and dense vector dot product (0 milliseconds) [info] - iterator (1 millisecond) [info] - activeIterator (2 milliseconds) [info] - nonZeroIterator (1 millisecond) [info] AssociationRulesSuite: [info] - association 
[info] AssociationRulesSuite:
[info] - association rules using String type (96 milliseconds)
[info] HashingTFSuite:
[info] - params (1 millisecond)
[info] - hashingTF (423 milliseconds)
[info] - applying binary term freqs (46 milliseconds)
[info] - indexOf method (0 milliseconds)
[info] - SPARK-23469: Load HashingTF prior to Spark 3.0 (156 milliseconds)
[info] - read/write (283 milliseconds)
[info] LabeledPointSuite:
[info] - Kryo class register (9 milliseconds)
[info] NaiveBayesSuite:
[info] - model types (0 milliseconds)
[info] - params (1 millisecond)
[info] - naive bayes: default params (0 milliseconds)
[info] - Naive Bayes Multinomial (5 seconds, 200 milliseconds)
[info] - prediction on single instance (483 milliseconds)
[info] - Naive Bayes with weighted samples (4 seconds, 858 milliseconds)
[info] - Naive Bayes Bernoulli (13 seconds, 533 milliseconds)
[info] - detect negative values (216 milliseconds)
[info] - detect non zero or one values in Bernoulli (196 milliseconds)
[info] - Naive Bayes Gaussian (1 second, 403 milliseconds)
[info] - Naive Bayes Gaussian - Model Coefficients (235 milliseconds)
[info] - Naive Bayes Complement (270 milliseconds)
[info] - read/write (2 seconds, 664 milliseconds)
[info] - should support all NumericType labels and weights, and not support other types (1 second, 125 milliseconds)
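[editor's note — not part of the console log: the NaiveBayesSuite above covers the four modelType variants. A minimal sketch of org.apache.spark.ml.classification.NaiveBayes; session and data are illustrative assumptions.]

    import org.apache.spark.ml.classification.NaiveBayes
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("nb-sketch").getOrCreate()
    import spark.implicits._

    // Multinomial NB requires nonnegative feature values
    val train = Seq(
      (0.0, Vectors.dense(1.0, 0.0, 0.0)),
      (1.0, Vectors.dense(0.0, 1.0, 0.0)),
      (1.0, Vectors.dense(0.0, 0.0, 1.0))
    ).toDF("label", "features")

    // modelType is one of "multinomial", "bernoulli", "gaussian", "complement",
    // matching the four variants covered by the suite above
    val nb = new NaiveBayes().setModelType("multinomial").setSmoothing(1.0)
    nb.fit(train).transform(train).select("label", "prediction").show()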
[info] DifferentiableLossAggregatorSuite:
[info] - empty aggregator (5 milliseconds)
[info] - aggregator initialization (3 milliseconds)
[info] - merge aggregators (3 milliseconds)
[info] - loss, gradient, weight (3 milliseconds)
[info] IndexedRowMatrixSuite:
[info] - size (47 milliseconds)
[info] - empty rows (24 milliseconds)
[info] - toBreeze (37 milliseconds)
[info] - toRowMatrix (40 milliseconds)
[info] - toCoordinateMatrix (52 milliseconds)
[info] - toBlockMatrix dense backing (172 milliseconds)
[info] - toBlockMatrix sparse backing (159 milliseconds)
[info] - toBlockMatrix mixed backing (127 milliseconds)
[info] - multiply a local matrix (86 milliseconds)
[info] - gram (32 milliseconds)
[info] - svd (99 milliseconds)
[info] - validate matrix sizes of svd (52 milliseconds)
[info] - validate k in svd (12 milliseconds)
[info] - similar columns (102 milliseconds)
[info] DecisionTreeSuite:
[info] - Binary classification stump with ordered categorical features (131 milliseconds)
[info] - Regression stump with 3-ary (ordered) categorical features (133 milliseconds)
[info] - Regression stump with binary (ordered) categorical features (133 milliseconds)
[info] - Binary classification stump with fixed label 0 for Gini (174 milliseconds)
[info] - Binary classification stump with fixed label 1 for Gini (171 milliseconds)
[info] - Binary classification stump with fixed label 0 for Entropy (172 milliseconds)
[info] - Binary classification stump with fixed label 1 for Entropy (172 milliseconds)
[info] - Multiclass classification stump with 3-ary (unordered) categorical features (165 milliseconds)
[info] - Binary classification stump with 1 continuous feature, to check off-by-1 error (118 milliseconds)
[info] - Binary classification stump with 2 continuous features (108 milliseconds)
[info] - Multiclass classification stump with unordered categorical features, with just enough bins (174 milliseconds)
[info] - Multiclass classification stump with continuous features (279 milliseconds)
[info] - Multiclass classification stump with continuous + unordered categorical features (340 milliseconds)
[info] - Multiclass classification stump with 10-ary (ordered) categorical features (211 milliseconds)
[info] - Multiclass classification tree with 10-ary (ordered) categorical features, with just enough bins (168 milliseconds)
[info] - split must satisfy min instances per node requirements (131 milliseconds)
[info] - do not choose split that does not satisfy min instance per node requirements (110 milliseconds)
[info] - split must satisfy min info gain requirements (128 milliseconds)
[info] - Node.subtreeIterator (5 milliseconds)
[info] - model save/load (1 second, 90 milliseconds)
[info] Word2VecSuite:
[info] - params (11 milliseconds)
[info] - Word2Vec (827 milliseconds)
[info] - getVectors (362 milliseconds)
[info] - findSynonyms (211 milliseconds)
[info] - window size (478 milliseconds)
[info] - Word2Vec read/write numPartitions calculation (2 milliseconds)
[info] - Word2Vec read/write (282 milliseconds)
[info] - Word2VecModel read/write (1 second, 55 milliseconds)
[info] - Word2Vec works with input that is non-nullable (NGram) (719 milliseconds)
[info] MLEventsSuite:
[info] - pipeline fit events (155 milliseconds)
[info] - pipeline model transform events (4 milliseconds)
[info] - pipeline read/write events (373 milliseconds)
[info] - pipeline model read/write events (330 milliseconds)
[info] MinHashLSHSuite:
[info] - params (4 milliseconds)
[info] - setters (0 milliseconds)
[info] - MinHashLSH: default params (0 milliseconds)
[info] - read/write (1 second, 180 milliseconds)
[info] - Model copy and uid checks (18 milliseconds)
[info] - hashFunction (4 milliseconds)
[info] - hashFunction: empty vector (1 millisecond)
[info] - keyDistance (2 milliseconds)
[info] - MinHashLSH: test of LSH property (965 milliseconds)
[info] - MinHashLSH: test of inputDim > prime (21 milliseconds)
[info] - approxNearestNeighbors for min hash (547 milliseconds)
[info] - approxNearestNeighbors for numNeighbors <= 0 (3 milliseconds)
[info] - approxSimilarityJoin for min hash on different dataset (1 second, 171 milliseconds)
[info] - MinHashLSHModel.transform should work with Structured Streaming (443 milliseconds)
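[editor's note — not part of the console log: the MinHashLSHSuite above tests approximate Jaccard-similarity search. A minimal sketch of org.apache.spark.ml.feature.MinHashLSH; the session, data, and threshold are illustrative assumptions.]

    import org.apache.spark.ml.feature.MinHashLSH
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("lsh-sketch").getOrCreate()
    import spark.implicits._

    // Binary "set" vectors; MinHash requires at least one non-zero entry per vector
    val dfA = Seq(
      (0, Vectors.sparse(6, Array(0, 1, 2), Array(1.0, 1.0, 1.0))),
      (1, Vectors.sparse(6, Array(2, 3, 4), Array(1.0, 1.0, 1.0)))
    ).toDF("id", "features")

    val mh = new MinHashLSH()
      .setNumHashTables(3)
      .setInputCol("features")
      .setOutputCol("hashes")

    val model = mh.fit(dfA)
    // Self-join on Jaccard distance < 0.8, as in the approxSimilarityJoin tests
    model.approxSimilarityJoin(dfA, dfA, 0.8, "JaccardDistance").show()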
[info] PythonMLLibAPISuite:
[info] - pickle vector (16 milliseconds)
[info] - pickle labeled point (2 milliseconds)
[info] - pickle double (1 millisecond)
[info] - pickle matrix (3 milliseconds)
[info] - pickle rating (3 milliseconds)
[info] LabeledPointSuite:
[info] - parse labeled points (1 millisecond)
[info] - parse labeled points with whitespaces (0 milliseconds)
[info] - parse labeled points with v0.9 format (1 millisecond)
[info] - conversions between new ml LabeledPoint and mllib LabeledPoint (1 millisecond)
[info] - Kryo class register (7 milliseconds)
[info] FPGrowthSuite:
[info] - FPGrowth fit and transform with different data types (3 seconds, 795 milliseconds)
[info] - FPGrowth associationRules (224 milliseconds)
[info] - FPGrowth getFreqItems (447 milliseconds)
[info] - FPGrowth getFreqItems with Null (334 milliseconds)
[info] - FPGrowth prediction should not contain duplicates (263 milliseconds)
[info] - FPGrowthModel setMinConfidence should affect rules generation and transform (638 milliseconds)
[info] - FPGrowth parameter check (94 milliseconds)
[info] - read/write (1 second, 885 milliseconds)
[info] RandomRDDsSuite:
[info] - RandomRDD sizes (154 milliseconds)
[info] - randomRDD for different distributions (3 seconds, 165 milliseconds)
[info] - randomVectorRDD for different distributions (2 seconds, 428 milliseconds)
[info] AreaUnderCurveSuite:
[info] - auc computation (32 milliseconds)
[info] - auc of an empty curve (18 milliseconds)
[info] - auc of a curve with a single point (15 milliseconds)
[info] RDDLossFunctionSuite:
[info] - regularization (64 milliseconds)
[info] - empty RDD (24 milliseconds)
[info] - versus aggregating on an iterable (30 milliseconds)
[info] RFormulaParserSuite:
[info] - parse simple formulas (8 milliseconds)
[info] - parse dot (5 milliseconds)
[info] - parse deletion (4 milliseconds)
[info] - parse additions and deletions in order (2 milliseconds)
[info] - dot ignores complex column types (4 milliseconds)
[info] - parse intercept (7 milliseconds)
[info] - parse interactions (13 milliseconds)
[info] - parse factor cross (7 milliseconds)
[info] - interaction distributive (4 milliseconds)
[info] - factor cross distributive (2 milliseconds)
[info] - parse power (12 milliseconds)
[info] - operator precedence (3 milliseconds)
[info] - parse basic interactions with dot (3 milliseconds)
[info] - parse all to all iris interactions (1 millisecond)
[info] - parse interaction negation with iris (5 milliseconds)
[info] ImputerSuite:
[info] - Imputer for Double with default missing Value NaN (1 second, 141 milliseconds)
[info] - Single Column: Imputer for Double with default missing Value NaN (1 second, 378 milliseconds)
[info] - Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN (616 milliseconds)
[info] - Single Column: Imputer should handle NaNs when computing surrogate value, if missingValue is not NaN (574 milliseconds)
[info] - Imputer for Float with missing Value -1.0 (615 milliseconds)
[info] - Single Column: Imputer for Float with missing Value -1.0 (586 milliseconds)
[info] - Imputer should impute null as well as 'missingValue' (651 milliseconds)
[info] - Single Column: Imputer should impute null as well as 'missingValue' (606 milliseconds)
[info] - Imputer should work with Structured Streaming (449 milliseconds)
[info] - Single Column: Imputer should work with Structured Streaming (408 milliseconds)
[info] - Imputer throws exception when surrogate cannot be computed (118 milliseconds)
[info] - Single Column: Imputer throws exception when surrogate cannot be computed (312 milliseconds)
[info] - Imputer input & output column validation (12 milliseconds)
[info] - Imputer read/write (288 milliseconds)
[info] - Single Column: Imputer read/write (301 milliseconds)
[info] - ImputerModel read/write (839 milliseconds)
[info] - Single Column: ImputerModel read/write (776 milliseconds)
[info] - Imputer for IntegerType with default missing value null (1 second, 289 milliseconds)
[info] - Single Column Imputer for IntegerType with default missing value null (1 second, 268 milliseconds)
[info] - Imputer for IntegerType with missing value -1 (1 second, 239 milliseconds)
[info] - Single Column: Imputer for IntegerType with missing value -1 (1 second, 244 milliseconds)
[info] - assert exception is thrown if both multi-column and single-column params are set (19 milliseconds)
[info] - Compare single/multiple column(s) Imputer in pipeline (1 second, 877 milliseconds)
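[editor's note — not part of the console log: the ImputerSuite above tests surrogate-value imputation of NaN/null. A minimal sketch of org.apache.spark.ml.feature.Imputer; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.Imputer
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("imputer-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      (1.0, Double.NaN),
      (2.0, 3.0),
      (Double.NaN, 5.0)
    ).toDF("a", "b")

    // Replaces NaN (the default missingValue) with the per-column mean;
    // setStrategy("median") and setMissingValue(-1.0) cover other cases tested above
    val imputer = new Imputer()
      .setInputCols(Array("a", "b"))
      .setOutputCols(Array("a_imp", "b_imp"))

    imputer.fit(df).transform(df).show()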
[info] SQLTransformerSuite:
[info] - params (2 milliseconds)
[info] - transform numeric data (590 milliseconds)
[info] - read/write (267 milliseconds)
[info] - transformSchema (19 milliseconds)
[info] - SPARK-22538: SQLTransformer should not unpersist given dataset (560 milliseconds)
[info] MinMaxScalerSuite:
[info] - MinMaxScaler fit basic case (574 milliseconds)
[info] - MinMaxScaler arguments max must be larger than min (24 milliseconds)
[info] - MinMaxScaler read/write (294 milliseconds)
[info] - MinMaxScalerModel read/write (892 milliseconds)
[info] - MinMaxScaler should remain NaN value (112 milliseconds)
[info] BlockMatrixSuite:
[info] - size (0 milliseconds)
[info] - grid partitioner (12 milliseconds)
[info] - toCoordinateMatrix (31 milliseconds)
[info] - toIndexedRowMatrix (197 milliseconds)
[info] - toBreeze and toLocalMatrix (29 milliseconds)
[info] - add (131 milliseconds)
[info] - subtract (131 milliseconds)
[info] - multiply (922 milliseconds)
[info] - simulate multiply (48 milliseconds)
[info] - validate (376 milliseconds)
[info] - transpose (76 milliseconds)
[info] ImageFileFormatSuite:
[info] - Smoke test: create basic ImageSchema dataframe (116 milliseconds)
[info] - image datasource count test (438 milliseconds)
[info] - image datasource test: read jpg image (84 milliseconds)
[info] - image datasource test: read png image (70 milliseconds)
[info] - image datasource test: read non image (257 milliseconds)
[info] - image datasource partition test (205 milliseconds)
[info] - readImages pixel values test (132 milliseconds)
[info] IterativelyReweightedLeastSquaresSuite:
[info] - IRLS against GLM with Binomial errors (226 milliseconds)
[info] - IRLS against GLM with Poisson errors (234 milliseconds)
[info] - IRLS against L1Regression (306 milliseconds)
[info] TestingUtilsSuite:
[info] - Comparing doubles using relative error. (8 milliseconds)
[info] - Comparing doubles using absolute error. (2 milliseconds)
[info] - Comparing vectors using relative error. (4 milliseconds)
[info] - Comparing vectors using absolute error. (2 milliseconds)
[info] - Comparing Matrices using absolute error. (4 milliseconds)
[info] - Comparing Matrices using relative error. (6 milliseconds)
[info] - SPARK-31400, catalogString distinguish Vectors in ml and mllib (2 milliseconds)
[info] TreePointSuite:
[info] - Kryo class register (6 milliseconds)
[info] FMRegressorSuite:
[info] - params (3 milliseconds)
[info] - combineCoefficients (1 millisecond)
[info] - splitCoefficients (1 millisecond)
[info] - MSE with intercept and linear (3 seconds, 63 milliseconds)
[info] - MSE with intercept but without linear (3 seconds, 22 milliseconds)
[info] - MSE with linear but without intercept (3 seconds, 68 milliseconds)
[info] - MSE without intercept or linear (2 seconds, 747 milliseconds)
[info] - read/write (1 second, 444 milliseconds)
[info] TopByKeyAggregatorSuite:
[info] - topByKey with k < #items (708 milliseconds)
[info] - topByKey with k > #items (326 milliseconds)
[info] StandardScalerSuite:
[info] - Standardization with dense input when means and stds are provided (308 milliseconds)
[info] - Standardization with dense input (230 milliseconds)
[info] - Standardization with sparse input when means and stds are provided (178 milliseconds)
[info] - Standardization with sparse input (175 milliseconds)
[info] - Standardization with constant input when means and stds are provided (52 milliseconds)
[info] - Standardization with constant input (48 milliseconds)
[info] - StandardScalerModel argument nulls are properly handled (4 milliseconds)
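[editor's note — not part of the console log: the StandardScalerSuite above tests standardization of dense/sparse input. A minimal sketch of org.apache.spark.ml.feature.StandardScaler; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.StandardScaler
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("scaler-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      Vectors.dense(1.0, 10.0),
      Vectors.dense(2.0, 20.0),
      Vectors.dense(3.0, 30.0)
    ).map(Tuple1.apply).toDF("features")

    // withStd scales to unit variance; withMean also centers (which densifies sparse input)
    val scaler = new StandardScaler()
      .setInputCol("features")
      .setOutputCol("scaled")
      .setWithMean(true)
      .setWithStd(true)

    scaler.fit(df).transform(df).show(truncate = false)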
[info] DecisionTreeRegressorSuite:
[info] - Regression stump with 3-ary (ordered) categorical features (349 milliseconds)
[info] - Regression stump with binary (ordered) categorical features (293 milliseconds)
[info] - copied model must have the same parent (207 milliseconds)
[info] - predictVariance (1 second, 218 milliseconds)
[info] - Feature importance with toy data (230 milliseconds)
[info] - prediction on single instance (244 milliseconds)
[info] - model support predict leaf index (81 milliseconds)
[info] - should support all NumericType labels and not support other types (1 second, 218 milliseconds)
[info] - training with sample weights (7 seconds, 291 milliseconds)
[info] - read/write (4 seconds, 704 milliseconds)
[info] - SPARK-33398: Load DecisionTreeRegressionModel prior to Spark 3.0 (421 milliseconds)
[info] RowMatrixSuite:
[info] - size (89 milliseconds)
[info] - empty rows (40 milliseconds)
[info] - toBreeze (41 milliseconds)
[info] - gram (51 milliseconds)
[info] - getTreeAggregateIdealDepth (11 milliseconds)
[info] - SPARK-33043: getTreeAggregateIdealDepth with unlimited driver size (3 milliseconds)
[info] - similar columns (385 milliseconds)
[info] - svd of a full-rank matrix (1 second, 805 milliseconds)
[info] - svd of a low-rank matrix (123 milliseconds)
[info] - validate k in svd (3 milliseconds)
[info] - pca (549 milliseconds)
[info] - multiply a local matrix (34 milliseconds)
[info] - compute column summary statistics (44 milliseconds)
[info] - QR Decomposition (287 milliseconds)
[info] - dense vector covariance accuracy (SPARK-26158) (96 milliseconds)
[info] - compute covariance (116 milliseconds)
[info] - covariance matrix is symmetric (SPARK-10875) (68 milliseconds)
[info] - QR decomposition should aware of empty partition (SPARK-16369) (239 milliseconds)
[info] NaiveBayesSuite:
[info] - model types (0 milliseconds)
[info] - get, set params (1 millisecond)
[info] - Naive Bayes Multinomial (514 milliseconds)
[info] - Naive Bayes Bernoulli (630 milliseconds)
[info] - detect negative values (322 milliseconds)
[info] - detect non zero or one values in Bernoulli (279 milliseconds)
[info] - model save/load: 2.0 to 2.0 (942 milliseconds)
[info] - model save/load: 1.0 to 2.0 (491 milliseconds)
[info] NNLSSuite:
[info] - NNLS: exact solution cases (20 milliseconds)
[info] - NNLS: nonnegativity constraint active (1 millisecond)
[info] - NNLS: objective value test (2 milliseconds)
[info] GradientDescentSuite:
[info] - Assert the loss is decreasing. (653 milliseconds)
[info] - Test the loss and gradient of first iteration with regularization. (341 milliseconds)
[info] - iteration should end with convergence tolerance (198 milliseconds)
[info] VectorSizeHintStreamingSuite:
[info] - Test assemble vectors with size hint in streaming. (414 milliseconds)
[info] PCASuite:
[info] - Correct computing use a PCA wrapper (201 milliseconds)
[info] - memory cost computation (0 milliseconds)
[info] - number of features more than 65535 (488 milliseconds)
[info] IdentifiableSuite:
[info] - Identifiable (2 milliseconds)
[info] NGramSuite:
[info] - default behavior yields bigram features (539 milliseconds)
[info] - NGramLength=4 yields length 4 n-grams (324 milliseconds)
[info] - empty input yields empty output (328 milliseconds)
[info] - input array < n yields empty output (323 milliseconds)
[info] - read/write (296 milliseconds)
[info] GeneralizedLinearRegressionSuite:
[info] - export test data into CSV format !!! IGNORED !!!
[info] - params (20 milliseconds)
[info] - generalized linear regression: default params (593 milliseconds)
[info] - prediction on single instance (622 milliseconds)
[info] - generalized linear regression: gaussian family against glm (7 seconds, 914 milliseconds)
[info] - generalized linear regression: gaussian family against glmnet (2 seconds, 489 milliseconds)
[info] - generalized linear regression: binomial family against glm (8 seconds, 529 milliseconds)
[info] - generalized linear regression: poisson family against glm (8 seconds, 129 milliseconds)
[info] - generalized linear regression: poisson family against glm (with zero values) (521 milliseconds)
[info] - generalized linear regression: gamma family against glm (7 seconds, 763 milliseconds)
[info] - generalized linear regression: tweedie family against glm (6 seconds, 167 milliseconds)
[info] - generalized linear regression: tweedie family against glm (default power link) (3 seconds, 785 milliseconds)
[info] - generalized linear regression: intercept only (1 second, 523 milliseconds)
[info] - generalized linear regression with weight and offset (5 seconds, 443 milliseconds)
[info] - glm summary: gaussian family with weight and offset (603 milliseconds)
[info] - glm summary: binomial family with weight and offset (579 milliseconds)
[info] - glm summary: poisson family with weight and offset (547 milliseconds)
[info] - glm summary: gamma family with weight and offset (611 milliseconds)
[info] - glm summary: tweedie family with weight and offset (1 second, 234 milliseconds)
[info] - glm handle collinear features (95 milliseconds)
[info] - read/write (1 second, 793 milliseconds)
[info] - should support all NumericType labels and weights, and not support other types (933 milliseconds)
[info] - glm accepts Dataset[LabeledPoint] (419 milliseconds)
[info] - glm summary: feature name (249 milliseconds)
[info] - glm summary: coefficient with statistics (309 milliseconds)
[info] - generalized linear regression: regularization parameter (512 milliseconds)
[info] - evaluate with labels that are not doubles (387 milliseconds)
[info] - SPARK-23131 Kryo raises StackOverflow during serializing GLR model (92 milliseconds)
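[editor's note — not part of the console log: the GeneralizedLinearRegressionSuite above validates GLM families against R's glm/glmnet. A minimal sketch of org.apache.spark.ml.regression.GeneralizedLinearRegression with a Poisson family; session, data, and parameters are illustrative assumptions.]

    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.ml.regression.GeneralizedLinearRegression
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("glr-sketch").getOrCreate()
    import spark.implicits._

    // Toy count data for a Poisson GLM with log link
    val df = Seq(
      (1.0, Vectors.dense(0.0, 1.0)),
      (2.0, Vectors.dense(1.0, 2.0)),
      (4.0, Vectors.dense(2.0, 3.0))
    ).toDF("label", "features")

    val glr = new GeneralizedLinearRegression()
      .setFamily("poisson")
      .setLink("log")
      .setMaxIter(25)

    val model = glr.fit(df)
    // The training summary carries the coefficient statistics checked by the suite
    println(s"coefficients=${model.coefficients} deviance=${model.summary.deviance}")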
[info] PMMLModelExportFactorySuite:
[info] - PMMLModelExportFactory create KMeansPMMLModelExport when passing a KMeansModel (5 milliseconds)
[info] - PMMLModelExportFactory create GeneralizedLinearPMMLModelExport when passing a LinearRegressionModel, RidgeRegressionModel or LassoModel (1 millisecond)
[info] - PMMLModelExportFactory create BinaryClassificationPMMLModelExport when passing a LogisticRegressionModel or SVMModel (3 milliseconds)
[info] - PMMLModelExportFactory throw IllegalArgumentException when passing a Multinomial Logistic Regression (1 millisecond)
[info] - PMMLModelExportFactory throw IllegalArgumentException when passing an unsupported model (1 millisecond)
[info] TokenizerSuite:
[info] - params (2 milliseconds)
[info] - read/write (298 milliseconds)
[info] UnivariateFeatureSelectorSuite:
[info] - params (0 milliseconds)
[info] - Test numTopFeatures (1 second, 867 milliseconds)
[info] - Test percentile (1 second, 516 milliseconds)
[info] - Test fpr (1 second, 553 milliseconds)
[info] - Test fdr (2 seconds, 388 milliseconds)
[info] - Test fwe (1 second, 484 milliseconds)
[info] - Test selectIndicesFromPValues f_classif (796 milliseconds)
[info] - Test selectIndicesFromPValues f_regression (928 milliseconds)
[info] - read/write (1 second, 288 milliseconds)
[info] RandomForestRegressorSuite:
[info] - Regression with continuous features: comparing DecisionTree vs. RandomForest(numTrees = 1) (776 milliseconds)
[info] - Regression with continuous features and node Id cache : comparing DecisionTree vs. RandomForest(numTrees = 1) (703 milliseconds)
[info] - prediction on single instance (497 milliseconds)
[info] - Feature importance with toy data (185 milliseconds)
[info] - model support predict leaf index (95 milliseconds)
[info] - should support all NumericType labels and not support other types (1 second, 284 milliseconds)
[info] - tree params (346 milliseconds)
[info] - training with sample weights (18 seconds, 45 milliseconds)
[info] - read/write (2 seconds, 328 milliseconds)
[info] - SPARK-33398: Load RandomForestRegressionModel prior to Spark 3.0 (528 milliseconds)
[info] BinaryClassificationMetricsSuite:
[info] - binary evaluation metrics (256 milliseconds)
[info] - binary evaluation metrics with weights (287 milliseconds)
[info] - binary evaluation metrics for RDD where all examples have positive label (218 milliseconds)
[info] - binary evaluation metrics for RDD where all examples have negative label (218 milliseconds)
[info] - binary evaluation metrics with downsampling (145 milliseconds)
[info] CountVectorizerSuite:
[info] - params (8 milliseconds)
[info] - CountVectorizerModel common cases (427 milliseconds)
[info] - CountVectorizer common cases (447 milliseconds)
[info] - CountVectorizer vocabSize and minDF (962 milliseconds)
[info] - CountVectorizer maxDF (229 milliseconds)
[info] - CountVectorizer using both minDF and maxDF (257 milliseconds)
[info] - CountVectorizerModel with minTF count (322 milliseconds)
[info] - CountVectorizerModel with minTF freq (323 milliseconds)
[info] - CountVectorizerModel and CountVectorizer with binary (714 milliseconds)
[info] - CountVectorizer read/write (289 milliseconds)
[info] - CountVectorizerModel read/write (870 milliseconds)
[info] - SPARK-22974: CountVectorModel should attach proper attribute to output column (37 milliseconds)
[info] - SPARK-32662: Test on empty dataset (58 milliseconds)
[info] - SPARK-32662: Remove requirement for minimum vocabulary size (1 second, 201 milliseconds)
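[editor's note — not part of the console log: the CountVectorizerSuite above tests vocabulary building with vocabSize/minDF/maxDF. A minimal sketch of org.apache.spark.ml.feature.CountVectorizer; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.CountVectorizer
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("cv-sketch").getOrCreate()
    import spark.implicits._

    val docs = Seq(
      (0, Seq("a", "b", "c")),
      (1, Seq("a", "b", "b", "c", "a"))
    ).toDF("id", "words")

    // vocabSize caps the dictionary; minDF drops terms seen in too few documents
    val cv = new CountVectorizer()
      .setInputCol("words")
      .setOutputCol("features")
      .setVocabSize(3)
      .setMinDF(2)

    val model = cv.fit(docs)
    println(model.vocabulary.mkString(", "))
    model.transform(docs).show(truncate = false)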
[info] NormalizerSuite:
[info] - Normalization using L1 distance (57 milliseconds)
[info] - Normalization using L2 distance (36 milliseconds)
[info] - Normalization using L^Inf distance. (36 milliseconds)
[info] RidgeRegressionClusterSuite:
[info] - task size should be small in both training and prediction (5 seconds, 557 milliseconds)
[info] ALSSuite:
[info] - rank-1 matrices (1 second, 306 milliseconds)
[info] - rank-1 matrices bulk (1 second, 357 milliseconds)
[info] - rank-2 matrices (1 second, 244 milliseconds)
[info] - rank-2 matrices bulk (1 second, 509 milliseconds)
[info] - rank-1 matrices implicit (1 second, 965 milliseconds)
[info] - rank-1 matrices implicit bulk (2 seconds, 497 milliseconds)
[info] - rank-2 matrices implicit (2 seconds, 96 milliseconds)
[info] - rank-2 matrices implicit bulk (2 seconds, 147 milliseconds)
[info] - rank-2 matrices implicit negative (1 second, 832 milliseconds)
[info] - rank-2 matrices with different user and product blocks (1 second, 389 milliseconds)
[info] - pseudorandomness (1 second, 87 milliseconds)
[info] - Storage Level for RDDs in model (613 milliseconds)
[info] - negative ids (1 second, 290 milliseconds)
[info] - NNALS, rank 2 (1 second, 314 milliseconds)
[info] - SPARK-18268: ALS with empty RDD should fail with better message (27 milliseconds)
[info] ALSCleanerSuite:
[info] - ALS shuffle cleanup in algorithm (2 seconds, 602 milliseconds)
[info] RFormulaSuite:
[info] - params (1 millisecond)
[info] - transform numeric data (469 milliseconds)
[info] - features column already exists (22 milliseconds)
[info] - label column already exists and forceIndexLabel was set with false (441 milliseconds)
[info] - label column already exists but forceIndexLabel was set with true (5 milliseconds)
[info] - label column already exists but is not numeric type (53 milliseconds)
[info] - allow missing label column for test datasets (351 milliseconds)
[info] - allow empty label (413 milliseconds)
[info] - encodes string terms (777 milliseconds)
[info] - encodes string terms with string indexer order type (2 seconds, 214 milliseconds)
[info] - test consistency with R when encoding string terms (527 milliseconds)
[info] - formula w/o intercept, we should output reference category when encoding string terms (1 second, 397 milliseconds)
[info] - index string label (806 milliseconds)
[info] - force to index label even it is numeric type (785 milliseconds)
[info] - attribute generation (549 milliseconds)
[info] - vector attribute generation (480 milliseconds)
[info] - vector attribute generation with unnamed input attrs (376 milliseconds)
[info] - numeric interaction (466 milliseconds)
[info] - factor numeric interaction (541 milliseconds)
[info] - factor factor interaction (1 second, 183 milliseconds)
[info] - read/write: RFormula (287 milliseconds)
[info] - read/write: RFormulaModel (6 seconds, 632 milliseconds)
[info] - should support all NumericType labels (196 milliseconds)
[info] - handle unseen features or labels (4 seconds, 627 milliseconds)
[info] - Use Vectors as inputs to formula. (790 milliseconds)
[info] - SPARK-23562 RFormula handleInvalid should handle invalid values in non-string columns. (1 second, 41 milliseconds)
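[editor's note — not part of the console log: the RFormulaSuite above tests R-style formula encoding. A minimal sketch of org.apache.spark.ml.feature.RFormula; session, data, and the formula string are illustrative assumptions.]

    import org.apache.spark.ml.feature.RFormula
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("rformula-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      (7, "US", 18, 1.0),
      (8, "CA", 12, 0.0),
      (9, "NZ", 15, 0.0)
    ).toDF("id", "country", "hour", "clicked")

    // R-style formula: label ~ terms; string columns are one-hot encoded,
    // and ":" denotes an interaction term, as in the parser tests earlier
    val formula = new RFormula()
      .setFormula("clicked ~ country + hour + country:hour")

    formula.fit(df).transform(df).select("features", "label").show(truncate = false)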
[info] QuantileDiscretizerSuite:
[info] - Test observed number of buckets and their sizes match expected values (5 seconds, 486 milliseconds)
[info] - Test on data with high proportion of duplicated values (839 milliseconds)
[info] - Test transform on data with NaN value (1 second, 125 milliseconds)
[info] - Test transform method on unseen data (684 milliseconds)
[info] - read/write (368 milliseconds)
[info] - Verify resulting model has parent (64 milliseconds)
[info] - Multiple Columns: Test observed number of buckets and their sizes match expected values (10 seconds, 628 milliseconds)
[info] - Multiple Columns: Test on data with high proportion of duplicated values (927 milliseconds)
[info] - Multiple Columns: Test transform on data with NaN value (1 second, 168 milliseconds)
[info] - Multiple Columns: Test numBucketsArray (461 milliseconds)
[info] - Multiple Columns: Compare single/multiple column(s) QuantileDiscretizer in pipeline (584 milliseconds)
[info] - Multiple Columns: Comparing setting numBuckets with setting numBucketsArray explicitly with identical values (438 milliseconds)
[info] - Multiple Columns: read/write (344 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCols/outputCols (11 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCols/numBucketsArray (8 milliseconds)
[info] - Multiple Columns: Set both of numBuckets/numBucketsArray (7 milliseconds)
[info] - Setting numBucketsArray for Single-Column QuantileDiscretizer (9 milliseconds)
[info] - Assert exception is thrown if both multi-column and single-column params are set (7 milliseconds)
[info] - Setting inputCol without setting outputCol (267 milliseconds)
[info] - [SPARK-31676] QuantileDiscretizer raise error parameter splits given invalid value (65 milliseconds)
[info] NormalizerSuite:
[info] - Normalization with default parameter (482 milliseconds)
[info] - Normalization with setter (310 milliseconds)
[info] - read/write (290 milliseconds)
[info] ChiSqSelectorSuite:
[info] - ChiSqSelector transform by numTopFeatures test (sparse & dense vector) (154 milliseconds)
[info] - ChiSqSelector transform by Percentile test (sparse & dense vector) (110 milliseconds)
[info] - ChiSqSelector transform by FPR test (sparse & dense vector) (107 milliseconds)
[info] - ChiSqSelector transform by FDR test (sparse & dense vector) (110 milliseconds)
[info] - ChiSqSelector transform by FWE test (sparse & dense vector) (110 milliseconds)
[info] - model load / save (487 milliseconds)
[info] GBTRegressorSuite:
[info] - Regression with continuous features (18 seconds, 831 milliseconds)
[info] - GBTRegressor behaves reasonably on toy data (718 milliseconds)
[info] - prediction on single instance (506 milliseconds)
[info] - Checkpointing (804 milliseconds)
[info] - model support predict leaf index (112 milliseconds)
[info] - should support all NumericType labels and not support other types (9 seconds, 706 milliseconds)
[info] - Feature importance with toy data (438 milliseconds)
[info] - Tests of feature subset strategy (955 milliseconds)
[info] - model evaluateEachIteration (1 second, 654 milliseconds)
[info] - runWithValidation stops early and performs better on a validation dataset (6 seconds, 86 milliseconds)
[info] - tree params (2 seconds, 304 milliseconds)
[info] - training with sample weights (36 seconds, 549 milliseconds)
[info] - model save/load (4 seconds, 808 milliseconds)
[info] - SPARK-33398: Load GBTRegressionModel prior to Spark 3.0 (548 milliseconds)
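[editor's note — not part of the console log: the QuantileDiscretizerSuite above tests bucketing by approximate quantiles. A minimal sketch of org.apache.spark.ml.feature.QuantileDiscretizer; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.QuantileDiscretizer
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("qd-sketch").getOrCreate()
    import spark.implicits._

    val df = (1 to 100).map(i => i.toDouble / 10).toDF("value")

    // fit() computes approximate quantile splits and returns a Bucketizer model
    val discretizer = new QuantileDiscretizer()
      .setInputCol("value")
      .setOutputCol("bucket")
      .setNumBuckets(4)

    discretizer.fit(df).transform(df).groupBy("bucket").count().show()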
[info] InteractionSuite:
[info] - params (1 millisecond)
[info] - feature encoder (7 milliseconds)
[info] - numeric interaction (392 milliseconds)
[info] - nominal interaction (324 milliseconds)
[info] - default attr names (100 milliseconds)
[info] - read/write (293 milliseconds)
[info] ImpuritySuite:
[info] - Gini impurity does not support negative labels (1 millisecond)
[info] - Entropy does not support negative labels (1 millisecond)
[info] - Classification impurities are insensitive to scaling (3 milliseconds)
[info] - Regression impurities are insensitive to scaling (7 milliseconds)
[info] BisectingKMeansSuite:
[info] - default values (1 millisecond)
[info] - setter/getter (2 milliseconds)
[info] - 1D data (172 milliseconds)
[info] - points are the same (45 milliseconds)
[info] - more desired clusters than points (171 milliseconds)
[info] - min divisible cluster (275 milliseconds)
[info] - larger clusters get selected first (113 milliseconds)
[info] - 2D data (291 milliseconds)
[info] - BisectingKMeans model save/load (585 milliseconds)
[info] KMeansPMMLModelExportSuite:
[info] - KMeansPMMLModelExport generate PMML format (2 milliseconds)
[info] GradientBoostedTreesSuite:
[info] - Regression with continuous features: SquaredError (4 seconds, 690 milliseconds)
[info] - Regression with continuous features: Absolute Error (4 seconds, 621 milliseconds)
[info] - Binary classification with continuous features: Log Loss (5 seconds, 458 milliseconds)
[info] - SPARK-5496: BoostingStrategy.defaultParams should recognize Classification (2 milliseconds)
[info] - model save/load (1 second, 203 milliseconds)
[info] - Checkpointing (652 milliseconds)
[info] IDFSuite:
[info] - idf (68 milliseconds)
[info] - idf minimum document frequency filtering (34 milliseconds)
[info] VectorUDTSuite:
[info] - preloaded VectorUDT (2 milliseconds)
[info] - JavaTypeInference with VectorUDT (73 milliseconds)
[info] HuberAggregatorSuite:
[info] - aggregator add method should check input size (77 milliseconds)
[info] - negative weight (21 milliseconds)
[info] - check sizes (42 milliseconds)
[info] - check correctness (53 milliseconds)
[info] - check with zero standard deviation (36 milliseconds)
[info] RegexTokenizerSuite:
[info] - params (7 milliseconds)
[info] - RegexTokenizer (1 second, 174 milliseconds)
[info] - RegexTokenizer with toLowercase false (315 milliseconds)
[info] - read/write (286 milliseconds)
[info] ClassifierSuite:
[info] - extractLabeledPoints (229 milliseconds)
[info] - getNumClasses (305 milliseconds)
[info] BinaryClassificationPMMLModelExportSuite:
[info] - logistic regression PMML export (2 milliseconds)
[info] - linear SVM PMML export (1 millisecond)
[info] BLASSuite:
[info] - nativeL1Threshold (0 milliseconds)
[info] - copy (3 milliseconds)
[info] - scal (0 milliseconds)
[info] - axpy (4 milliseconds)
[info] - dot (1 millisecond)
[info] - spr (1 millisecond)
[info] - syr (6 milliseconds)
[info] - gemm (4 milliseconds)
[info] - gemv (4 milliseconds)
[info] ChiSqSelectorSuite:
[info] - params (6 milliseconds)
[info] - Test Chi-Square selector: numTopFeatures (645 milliseconds)
[info] - Test Chi-Square selector: percentile (540 milliseconds)
[info] - Test Chi-Square selector: fpr (505 milliseconds)
[info] - Test Chi-Square selector: fdr (896 milliseconds)
[info] - Test Chi-Square selector: fwe (490 milliseconds)
[info] - read/write (1 second, 261 milliseconds)
[info] - should support all NumericType labels and not support other types (1 second, 171 milliseconds)
[info] - SPARK-25289: ChiSqSelector should not fail when selecting no features with FDR (306 milliseconds)
[info] DifferentiableRegularizationSuite:
[info] - L2 regularization (6 milliseconds)
[info] RidgeRegressionSuite:
[info] - ridge regression can help avoid overfitting (4 seconds, 463 milliseconds)
[info] - model save/load (503 milliseconds)
[info] LBFGSSuite:
[info] - LBFGS loss should be decreasing and match the result of Gradient Descent. (3 seconds, 52 milliseconds)
[info] - LBFGS and Gradient Descent with L2 regularization should get the same result. (2 seconds, 926 milliseconds)
[info] - The convergence criteria should work as we expect. (1 second, 62 milliseconds)
[info] - Optimize via class LBFGS. (3 seconds, 115 milliseconds)
[info] - SPARK-18471: LBFGS aggregator on empty partitions (75 milliseconds)
[info] GradientSuite:
[info] - Gradient computation against numerical differentiation (7 milliseconds)
[info] KMeansSuite:
[info] - default parameters (847 milliseconds)
[info] - set parameters (1 millisecond)
[info] - parameters validation (3 milliseconds)
[info] - fit, transform and summary (1 second, 162 milliseconds)
[info] - KMeansModel transform with non-default feature and prediction cols (491 milliseconds)
[info] - KMeans using cosine distance (803 milliseconds)
[info] - KMeans with cosine distance is not supported for 0-length vectors (131 milliseconds)
[info] - KMean with Array input (1 second, 699 milliseconds)
[info] - read/write (1 second, 723 milliseconds)
[info] - pmml export (237 milliseconds)
[info] - prediction on single instance (464 milliseconds)
[info] - compare with weightCol and without weightCol (1 second, 347 milliseconds)
[info] - Two centers with weightCol (1 second, 299 milliseconds)
[info] - Four centers with weightCol (1 second, 478 milliseconds)
[info] ClusteringEvaluatorSuite:
[info] - params (3 milliseconds)
[info] - read/write (267 milliseconds)
[info] - squared euclidean Silhouette (1 second, 41 milliseconds)
[info] - cosine Silhouette (947 milliseconds)
[info] - number of clusters must be greater than one (180 milliseconds)
[info] - SPARK-23568: we should use metadata to determine features number (286 milliseconds)
[info] - SPARK-27896: single-element clusters should have silhouette score of 0 (458 milliseconds)
[info] - getMetrics (697 milliseconds)
[info] - test weight support (8 seconds, 191 milliseconds)
[info] - single-element clusters with weight (576 milliseconds)
[info] PolynomialExpansionSuite:
[info] - params (5 milliseconds)
[info] - Polynomial expansion with default parameter (452 milliseconds)
[info] - Polynomial expansion with setter (324 milliseconds)
[info] - Polynomial expansion with degree 1 is identity on vectors (345 milliseconds)
[info] - read/write (294 milliseconds)
[info] - SPARK-17027. Integer overflow in PolynomialExpansion.getPolySize (716 milliseconds)
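[editor's note — not part of the console log: the KMeansSuite and ClusteringEvaluatorSuite above cover clustering plus silhouette scoring. A minimal sketch combining org.apache.spark.ml.clustering.KMeans with org.apache.spark.ml.evaluation.ClusteringEvaluator; session and data are illustrative assumptions.]

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.evaluation.ClusteringEvaluator
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("kmeans-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
    ).map(Tuple1.apply).toDF("features")

    // setDistanceMeasure("cosine") would select the cosine variant tested above
    val model = new KMeans().setK(2).setSeed(1L).fit(df)
    val predictions = model.transform(df)

    // Silhouette close to 1.0 indicates well-separated clusters
    val silhouette = new ClusteringEvaluator().evaluate(predictions)
    println(s"silhouette = $silhouette")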
[info] HingeAggregatorSuite:
[info] - aggregator add method input size (25 milliseconds)
[info] - negative weight (20 milliseconds)
[info] - check sizes (42 milliseconds)
[info] - check correctness (67 milliseconds)
[info] - check with zero standard deviation (38 milliseconds)
[info] StopWordsRemoverSuite:
[info] - StopWordsRemover default (543 milliseconds)
[info] - StopWordsRemover with particular stop words list (327 milliseconds)
[info] - StopWordsRemover with localed input (case insensitive) (350 milliseconds)
[info] - StopWordsRemover with localed input (case sensitive) (342 milliseconds)
[info] - StopWordsRemover with invalid locale (4 milliseconds)
[info] - StopWordsRemover case sensitive (324 milliseconds)
[info] - default stop words of supported languages are not empty (5 milliseconds)
[info] - StopWordsRemover with language selection (316 milliseconds)
[info] - StopWordsRemover with ignored words (319 milliseconds)
[info] - StopWordsRemover with additional words (346 milliseconds)
[info] - read/write (565 milliseconds)
[info] - StopWordsRemover output column already exists (40 milliseconds)
[info] - SPARK-28365: Fallback to en_US if default locale isn't in available locales (3 milliseconds)
[info] - Multiple Columns: StopWordsRemover default (99 milliseconds)
[info] - Multiple Columns: StopWordsRemover with particular stop words list (46 milliseconds)
[info] - Compare single/multiple column(s) StopWordsRemover in pipeline (73 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCols/outputCols (11 milliseconds)
[info] - Multiple Columns: Set both of inputCol/inputCols (10 milliseconds)
[info] LDASuite:
[info] - LocalLDAModel (8 milliseconds)
[info] - running and DistributedLDAModel with default Optimizer (EM) (788 milliseconds)
[info] - vertex indexing (3 milliseconds)
[info] - setter alias (1 millisecond)
[info] - initializing with alpha length != k or 1 fails (1 millisecond)
[info] - initializing with elements in alpha < 0 fails (2 milliseconds)
[info] - OnlineLDAOptimizer initialization (25 milliseconds)
[info] - OnlineLDAOptimizer one iteration (49 milliseconds)
[info] - OnlineLDAOptimizer with toy data (3 seconds, 801 milliseconds)
[info] - LocalLDAModel logLikelihood (36 milliseconds)
[info] - LocalLDAModel logPerplexity (28 milliseconds)
[info] - LocalLDAModel predict (84 milliseconds)
[info] - OnlineLDAOptimizer with asymmetric prior (3 seconds, 782 milliseconds)
[info] - OnlineLDAOptimizer alpha hyperparameter optimization (4 seconds, 102 milliseconds)
[info] - model save/load (2 seconds, 531 milliseconds)
[info] - EMLDAOptimizer with empty docs (313 milliseconds)
[info] - OnlineLDAOptimizer with empty docs (187 milliseconds)
[info] LinearRegressionSuite:
[info] - linear regression (1 second, 75 milliseconds)
[info] - linear regression without intercept (953 milliseconds)
[info] - sparse linear regression without intercept (1 second, 229 milliseconds)
[info] - model save/load (493 milliseconds)
[info] PowerIterationClusteringSuite:
[info] - default parameters (6 milliseconds)
[info] - parameter validation (2 milliseconds)
[info] - power iteration clustering (13 seconds, 552 milliseconds)
[info] - supported input types (9 seconds, 612 milliseconds)
[info] - invalid input: negative similarity (62 milliseconds)
[info] - check for invalid input types of weight (6 milliseconds)
[info] - test default weight (6 seconds, 593 milliseconds)
[info] - power iteration clustering gives incorrect results due to failed to converge (2 seconds, 220 milliseconds)
[info] - read/write (293 milliseconds)
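[editor's note — not part of the console log: the StopWordsRemoverSuite above tests locale-aware stop-word filtering. A minimal sketch of org.apache.spark.ml.feature.StopWordsRemover; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.StopWordsRemover
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("stopwords-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(
      (0, Seq("I", "saw", "the", "red", "balloon")),
      (1, Seq("Mary", "had", "a", "little", "lamb"))
    ).toDF("id", "raw")

    // Defaults to the English stop-word list; setLocale and setCaseSensitive
    // control the locale-dependent matching covered by the suite above
    val remover = new StopWordsRemover()
      .setInputCol("raw")
      .setOutputCol("filtered")

    remover.transform(df).show(truncate = false)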
[info] OneVsRestSuite:
[info] - params (11 milliseconds)
[info] - one-vs-rest: default params (6 seconds, 222 milliseconds)
[info] - one-vs-rest: tuning parallelism does not change output (6 seconds, 202 milliseconds)
[info] - one-vs-rest: pass label metadata correctly during train (995 milliseconds)
[info] - SPARK-8092: ensure label features and prediction cols are configurable (4 seconds, 7 milliseconds)
[info] - SPARK-18625 : OneVsRestModel should support setFeaturesCol and setPredictionCol (3 seconds, 501 milliseconds)
[info] - SPARK-8049: OneVsRest shouldn't output temp columns (1 second, 164 milliseconds)
[info] - SPARK-21306: OneVsRest should support setWeightCol (14 seconds, 848 milliseconds)
[info] - SPARK-34045: OneVsRestModel.transform should not call setter of submodels (1 second, 302 milliseconds)
[info] - OneVsRest.copy and OneVsRestModel.copy (1 second, 962 milliseconds)
[info] - read/write: OneVsRest (649 milliseconds)
[info] - read/write: OneVsRestModel (5 seconds, 363 milliseconds)
[info] - should ignore empty output cols (1 second, 361 milliseconds)
[info] - should support all NumericType labels and not support other types (4 seconds, 386 milliseconds)
[info] LBFGSClusterSuite:
[info] - task size should be small (5 seconds, 217 milliseconds)
[info] VectorSlicerSuite:
[info] - params (10 milliseconds)
[info] - feature validity checks (1 millisecond)
[info] - Test vector slicer (1 second, 385 milliseconds)
[info] - read/write (339 milliseconds)
[info] RandomForestSuite:
[info] - Binary classification with continuous features: comparing DecisionTree vs. RandomForest(numTrees = 1) (566 milliseconds)
[info] - Binary classification with continuous features and node Id cache : comparing DecisionTree vs. RandomForest(numTrees = 1) (579 milliseconds)
[info] - Regression with continuous features: comparing DecisionTree vs. RandomForest(numTrees = 1) (522 milliseconds)
[info] - Regression with continuous features and node Id cache : comparing DecisionTree vs. RandomForest(numTrees = 1) (589 milliseconds)
[info] - alternating categorical and continuous features with multiclass labels to test indexing (159 milliseconds)
[info] - subsampling rate in RandomForest (352 milliseconds)
[info] - model save/load (1 second, 119 milliseconds)
[info] MLSerDeSuite:
[info] - pickle vector (3 milliseconds)
[info] - pickle double (1 millisecond)
[info] - pickle matrix (2 milliseconds)
[info] StreamingLinearRegressionSuite:
[info] - parameter accuracy (5 seconds, 223 milliseconds)
[info] - parameter convergence (3 seconds, 527 milliseconds)
[info] - predictions (394 milliseconds)
[info] - training and prediction (3 seconds, 408 milliseconds)
[info] - handling empty RDDs in a stream (642 milliseconds)
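[editor's note — not part of the console log: the OneVsRestSuite above tests the one-vs-rest reduction of multiclass classification to binary classifiers. A minimal sketch of org.apache.spark.ml.classification.OneVsRest; session and data are illustrative assumptions.]

    import org.apache.spark.ml.classification.{LogisticRegression, OneVsRest}
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("ovr-sketch").getOrCreate()
    import spark.implicits._

    // Three-class toy problem; OneVsRest trains one binary model per class
    val df = Seq(
      (0.0, Vectors.dense(0.0, 1.0)),
      (1.0, Vectors.dense(1.0, 0.0)),
      (2.0, Vectors.dense(1.0, 1.0))
    ).toDF("label", "features")

    val base = new LogisticRegression().setMaxIter(10)
    val ovr = new OneVsRest().setClassifier(base)

    ovr.fit(df).transform(df).select("label", "prediction").show()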
[info] RandomForestClassifierSuite:
[info] - params (14 milliseconds)
[info] - Binary classification with continuous features: comparing DecisionTree vs. RandomForest(numTrees = 1) (887 milliseconds)
[info] - Binary classification with continuous features and node Id cache: comparing DecisionTree vs. RandomForest(numTrees = 1) (835 milliseconds)
[info] - alternating categorical and continuous features with multiclass labels to test indexing (486 milliseconds)
[info] - subsampling rate in RandomForest (1 second, 192 milliseconds)
[info] - predictRaw and predictProbability (5 seconds, 11 milliseconds)
[info] - prediction on single instance (698 milliseconds)
[info] - Fitting without numClasses in metadata (507 milliseconds)
[info] - Feature importance with toy data (360 milliseconds)
[info] - model support predict leaf index (96 milliseconds)
[info] - should support all NumericType labels and not support other types (2 seconds, 268 milliseconds)
[info] - tree params (789 milliseconds)
[info] - training with sample weights (17 seconds, 699 milliseconds)
[info] - summary for binary and multiclass (3 seconds, 194 milliseconds)
[info] - read/write (2 seconds, 487 milliseconds)
[info] - SPARK-33398: Load RandomForestClassificationModel prior to Spark 3.0 (560 milliseconds)
[info] PrefixSpanSuite:
[info] - PrefixSpan projections with multiple partial starts (369 milliseconds)
[info] - PrefixSpan Integer type, variable-size itemsets (228 milliseconds)
[info] - PrefixSpan input row with nulls (228 milliseconds)
[info] - PrefixSpan String type, variable-size itemsets (356 milliseconds)
[info] StringIndexerSuite:
[info] - params (1 millisecond)
[info] - params: input/output columns (31 milliseconds)
[info] - StringIndexer (622 milliseconds)
[info] - StringIndexer.transformSchema) (1 millisecond)
[info] - StringIndexer.transformSchema multi col (0 milliseconds)
[info] - StringIndexerUnseen (1 second, 75 milliseconds)
[info] - StringIndexer with a numeric input column (448 milliseconds)
[info] - StringIndexer with NULLs (1 second, 201 milliseconds)
[info] - StringIndexerModel should keep silent if the input column does not exist. (399 milliseconds)
[info] - StringIndexerModel can't overwrite output column (123 milliseconds)
[info] - StringIndexer read/write (288 milliseconds)
[info] - StringIndexerModel read/write (865 milliseconds)
[info] - IndexToString params (4 milliseconds)
[info] - IndexToString.transform (650 milliseconds)
[info] - StringIndexer, IndexToString are inverses (451 milliseconds)
[info] - IndexToString.transformSchema (SPARK-10573) (0 milliseconds)
[info] - IndexToString read/write (284 milliseconds)
[info] - SPARK 18698: construct IndexToString with custom uid (1 millisecond)
[info] - StringIndexer metadata (389 milliseconds)
[info] - StringIndexer order types (1 second, 627 milliseconds)
[info] - StringIndexer order types: secondary sort by alphabets when frequency equal (298 milliseconds)
[info] - SPARK-22446: StringIndexerModel's indexer UDF should not apply on filtered data (456 milliseconds)
[info] - StringIndexer multiple input columns (298 milliseconds)
[info] - Correctly skipping NULL and NaN values (114 milliseconds)
[info] - Load StringIndexderModel prior to Spark 3.0 (295 milliseconds)
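[editor's note — not part of the console log: the StringIndexerSuite above tests label indexing and its inverse. A minimal sketch of org.apache.spark.ml.feature.StringIndexer paired with IndexToString; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.{IndexToString, StringIndexer}
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("indexer-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a")).toDF("id", "category")

    // The most frequent label gets index 0.0 by default ("frequencyDesc");
    // setHandleInvalid("keep") covers unseen labels at transform time
    val indexer = new StringIndexer()
      .setInputCol("category")
      .setOutputCol("categoryIndex")
      .setHandleInvalid("keep")

    val indexed = indexer.fit(df).transform(df)

    // IndexToString inverts the mapping, as the "are inverses" test asserts
    val converter = new IndexToString()
      .setInputCol("categoryIndex")
      .setOutputCol("original")
    converter.transform(indexed).show()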
[info] SummarizerSuite:
[info] - no element (173 milliseconds)
[info] - single element - mean only (113 milliseconds)
[info] - single element - mean only w/o weight (88 milliseconds)
[info] - single element - sum only (77 milliseconds)
[info] - single element - sum only w/o weight (79 milliseconds)
[info] - single element - variance only (76 milliseconds)
[info] - single element - variance only w/o weight (80 milliseconds)
[info] - single element - std only (77 milliseconds)
[info] - single element - std only w/o weight (80 milliseconds)
[info] - single element - count only (76 milliseconds)
[info] - single element - count only w/o weight (74 milliseconds)
[info] - single element - numNonZeros only (80 milliseconds)
[info] - single element - numNonZeros only w/o weight (76 milliseconds)
[info] - single element - min only (116 milliseconds)
[info] - single element - min only w/o weight (82 milliseconds)
[info] - single element - max only (71 milliseconds)
[info] - single element - max only w/o weight (73 milliseconds)
[info] - single element - normL1 only (72 milliseconds)
[info] - single element - normL1 only w/o weight (75 milliseconds)
[info] - single element - normL2 only (74 milliseconds)
[info] - single element - normL2 only w/o weight (76 milliseconds)
[info] - single element - multiple metrics at once (99 milliseconds)
[info] - single element - multiple metrics at once w/o weight (76 milliseconds)
[info] - multiple elements (dense) - mean only (77 milliseconds)
[info] - multiple elements (dense) - mean only w/o weight (78 milliseconds)
[info] - multiple elements (dense) - sum only (75 milliseconds)
[info] - multiple elements (dense) - sum only w/o weight (78 milliseconds)
[info] - multiple elements (dense) - variance only (74 milliseconds)
[info] - multiple elements (dense) - variance only w/o weight (78 milliseconds)
[info] - multiple elements (dense) - std only (76 milliseconds)
[info] - multiple elements (dense) - std only w/o weight (81 milliseconds)
[info] - multiple elements (dense) - count only (72 milliseconds)
[info] - multiple elements (dense) - count only w/o weight (74 milliseconds)
[info] - multiple elements (dense) - numNonZeros only (77 milliseconds)
[info] - multiple elements (dense) - numNonZeros only w/o weight (77 milliseconds)
[info] - multiple elements (dense) - min only (78 milliseconds)
[info] - multiple elements (dense) - min only w/o weight (78 milliseconds)
[info] - multiple elements (dense) - max only (75 milliseconds)
[info] - multiple elements (dense) - max only w/o weight (79 milliseconds)
[info] - multiple elements (dense) - normL1 only (75 milliseconds)
[info] - multiple elements (dense) - normL1 only w/o weight (79 milliseconds)
[info] - multiple elements (dense) - normL2 only (74 milliseconds)
[info] - multiple elements (dense) - normL2 only w/o weight (79 milliseconds)
[info] - multiple elements (dense) - multiple metrics at once (76 milliseconds)
[info] - multiple elements (dense) - multiple metrics at once w/o weight (80 milliseconds)
[info] - multiple elements (sparse) - mean only (120 milliseconds)
[info] - multiple elements (sparse) - mean only w/o weight (96 milliseconds)
[info] - multiple elements (sparse) - sum only (88 milliseconds)
[info] - multiple elements (sparse) - sum only w/o weight (96 milliseconds)
[info] - multiple elements (sparse) - variance only (92 milliseconds)
[info] - multiple elements (sparse) - variance only w/o weight (99 milliseconds)
[info] - multiple elements (sparse) - std only (92 milliseconds)
[info] - multiple elements (sparse) - std only w/o weight (98 milliseconds)
[info] - multiple elements (sparse) - count only (78 milliseconds)
[info] - multiple elements (sparse) - count only w/o weight (71 milliseconds)
[info] - multiple elements (sparse) - numNonZeros only (82 milliseconds)
[info] - multiple elements (sparse) - numNonZeros only w/o weight (97 milliseconds)
[info] - multiple elements (sparse) - min only (92 milliseconds)
[info] - multiple elements (sparse) - min only w/o weight (98 milliseconds)
[info] - multiple elements (sparse) - max only (97 milliseconds)
[info] - multiple elements (sparse) - max only w/o weight (97 milliseconds)
[info] - multiple elements (sparse) - normL1 only (92 milliseconds)
[info] - multiple elements (sparse) - normL1 only w/o weight (98 milliseconds)
[info] - multiple elements (sparse) - normL2 only (94 milliseconds)
[info] - multiple elements (sparse) - normL2 only w/o weight (97 milliseconds)
[info] - multiple elements (sparse) - multiple metrics at once (98 milliseconds)
[info] - multiple elements (sparse) - multiple metrics at once w/o weight (86 milliseconds)
[info] - summarizer buffer basic error handing (11 milliseconds)
[info] - summarizer buffer dense vector input (0 milliseconds)
[info] - summarizer buffer sparse vector input (0 milliseconds)
[info] - summarizer buffer mixing dense and sparse vector input (1 millisecond)
[info] - summarizer buffer merging two summarizers (0 milliseconds)
[info] - summarizer buffer zero variance test (SPARK-21818) (0 milliseconds)
[info] - summarizer buffer merging summarizer with empty summarizer (0 milliseconds)
[info] - summarizer buffer merging summarizer when one side has zero mean (SPARK-4355) (0 milliseconds)
[info] - summarizer buffer merging summarizer with weighted samples (1 millisecond)
[info] - summarizer buffer test min/max with weighted samples (2 milliseconds)
[info] - support new metrics: sum, std, numFeatures, sumL2, weightSum (3 milliseconds)
[info] - performance test !!! IGNORED !!!
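[editor's note — not part of the console log: the SummarizerSuite above tests one-pass, optionally weighted vector statistics. A minimal sketch of org.apache.spark.ml.stat.Summarizer, closely following the pattern in Spark's documentation; session and data are illustrative assumptions.]

    import org.apache.spark.ml.linalg.{Vector, Vectors}
    import org.apache.spark.ml.stat.Summarizer
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("summarizer-sketch").getOrCreate()
    import spark.implicits._
    import Summarizer._

    val df = Seq(
      (Vectors.dense(2.0, 3.0), 1.0),
      (Vectors.dense(4.0, 5.0), 2.0)
    ).toDF("features", "weight")

    // Several metrics computed in a single pass, weighted by the "weight" column
    val (meanVal, varianceVal) = df
      .select(metrics("mean", "variance").summary($"features", $"weight").as("summary"))
      .select("summary.mean", "summary.variance")
      .as[(Vector, Vector)]
      .first()

    println(s"mean=$meanVal variance=$varianceVal")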
[info] BinarizerSuite:
[info] - params (10 milliseconds)
[info] - Binarize continuous features with default parameter (378 milliseconds)
[info] - Binarize continuous features with setter (26 milliseconds)
[info] - Binarize vector of continuous features with default parameter (70 milliseconds)
[info] - Binarize vector of continuous features with setter (32 milliseconds)
[info] - Binarizer should support sparse vector with negative threshold (33 milliseconds)
[info] - read/write (578 milliseconds)
[info] - Multiple Columns: Test thresholds (95 milliseconds)
[info] - Multiple Columns: Comparing setting threshold with setting thresholds explicitly with identical values (64 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCols/outputCols (14 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCols/thresholds (6 milliseconds)
[info] - Multiple Columns: Mismatched sizes of inputCol/thresholds (6 milliseconds)
[info] - Multiple Columns: Set both of threshold/thresholds (5 milliseconds)
[info] GaussianMixtureSuite:
[info] - gmm fails on high dimensional data (91 milliseconds)
[info] - single cluster (568 milliseconds)
[info] - two clusters (105 milliseconds)
[info] - two clusters with distributed decompositions (289 milliseconds)
[info] - single cluster with sparse data (575 milliseconds)
[info] - two clusters with sparse data (117 milliseconds)
[info] - model save / load (750 milliseconds)
[info] - model prediction, parallel and local (217 milliseconds)
[info] IDFSuite:
[info] - params (9 milliseconds)
[info] - compute IDF with default parameter (493 milliseconds)
[info] - compute IDF with setter (362 milliseconds)
[info] - IDF read/write (296 milliseconds)
[info] - IDFModel read/write (925 milliseconds)
[info] ANNSuite:
[info] - ANN with Sigmoid learns XOR function with LBFGS optimizer (324 milliseconds)
[info] - ANN with SoftMax learns XOR function with 2-bit output and batch GD optimizer (771 milliseconds)
[info] FeatureHasherSuite:
[info] - params (8 milliseconds)
[info] - specify input cols using varargs or array (1 millisecond)
[info] - feature hashing (448 milliseconds)
[info] - setting explicit numerical columns to treat as categorical (52 milliseconds)
[info] - hashing works for all numeric types (888 milliseconds)
[info] - invalid input type should fail (9 milliseconds)
[info] - hash collisions sum feature values (44 milliseconds)
[info] - ignores null values in feature hashing (35 milliseconds)
[info] - unicode column names and values (45 milliseconds)
[info] - read/write (303 milliseconds)
[info] MatrixUDTSuite:
[info] - preloaded MatrixUDT (3 milliseconds)
[info] VectorAssemblerSuite:
[info] - params (0 milliseconds)
[info] - assemble (3 milliseconds)
[info] - assemble should compress vectors (0 milliseconds)
[info] - VectorAssembler (76 milliseconds)
[info] - transform should throw an exception in case of unsupported type (11 milliseconds)
[info] - ML attributes (55 milliseconds)
[info] - read/write (302 milliseconds)
[info] - SPARK-22446: VectorAssembler's UDF should not apply on filtered data (199 milliseconds)
[info] - assemble should keep nulls when keepInvalid is true (1 millisecond)
[info] - assemble should throw errors when keepInvalid is false (2 milliseconds)
[info] - get lengths functions (276 milliseconds)
[info] - Handle Invalid should behave properly (638 milliseconds)
[info] - SPARK-25371: VectorAssembler with empty inputCols (24 milliseconds)
[info] - SPARK-31671: should give explicit error message when can not infer column lengths (17 milliseconds)
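[editor's note — not part of the console log: the VectorAssemblerSuite above tests column concatenation into a single feature vector. A minimal sketch of org.apache.spark.ml.feature.VectorAssembler; session and data are illustrative assumptions.]

    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("assembler-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq((18.0, 1.0, 0.5), (25.0, 0.0, 1.5)).toDF("age", "clicked", "score")

    // Concatenates numeric/vector columns into one feature vector;
    // setHandleInvalid("keep") would retain rows with nulls/NaNs, as tested above
    val assembler = new VectorAssembler()
      .setInputCols(Array("age", "clicked", "score"))
      .setOutputCol("features")

    assembler.transform(df).select("features").show(truncate = false)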
[info] FMClassifierSuite:
[info] - params (2 milliseconds)
[info] - FMClassifier: Predictor, Classifier methods (6 seconds, 235 milliseconds)
[info] - check logisticLoss with AdamW (1 second, 377 milliseconds)
[info] - check logisticLoss with GD (2 seconds, 520 milliseconds)
[info] - sparse datasets (395 milliseconds)
[info] - setThreshold, getThreshold (2 milliseconds)
[info] - thresholds prediction (6 seconds, 267 milliseconds)
[info] - FMClassifier doesn't fit intercept when fitIntercept is off (2 seconds, 519 milliseconds)
[info] - FMClassifier doesn't fit linear when fitLinear is off (2 seconds, 562 milliseconds)
[info] - prediction on single instance (2 seconds, 761 milliseconds)
[info] - summary and training summary (924 milliseconds)
[info] - FMClassifier training summary totalIterations (4 seconds, 312 milliseconds)
[info] - read/write (1 second, 390 milliseconds)
[info] WeightedLeastSquaresSuite:
[info] - WLS with strong L1 regularization (46 milliseconds)
[info] - diagonal inverse of AtWA (38 milliseconds)
[info] - two collinear features (262 milliseconds)
[info] - WLS against lm (194 milliseconds)
[info] - WLS against lm when label is constant and no regularization (226 milliseconds)
[info] - WLS with regularization when label is constant (45 milliseconds)
[info] - WLS against glmnet with constant features (391 milliseconds)
[info] - WLS against glmnet with L1/ElasticNet regularization (633 milliseconds)
[info] - WLS against glmnet with L2 regularization (583 milliseconds)
[info] ElementwiseProductSuite:
[info] - elementwise (hadamard) product should properly apply vector to dense data set (36 milliseconds)
[info] - elementwise (hadamard) product should properly apply vector to sparse data set (39 milliseconds)
[info] BaggedPointSuite:
[info] - BaggedPoint RDD: without subsampling with weights (58 milliseconds)
[info] - BaggedPoint RDD: with subsampling with replacement (fraction = 1.0) (373 milliseconds)
[info] - BaggedPoint RDD: with subsampling with replacement (fraction = 0.5) (202 milliseconds)
[info] - BaggedPoint RDD: with subsampling without replacement (fraction = 1.0) (300 milliseconds)
[info] - BaggedPoint RDD: with subsampling without replacement (fraction = 0.5) (181 milliseconds)
[info] MultivariateOnlineSummarizerSuite:
[info] - basic error handing (15 milliseconds)
[info] - dense vector input (1 millisecond)
[info] - sparse vector input (0 milliseconds)
[info] - mixing dense and sparse vector input (1 millisecond)
[info] - merging two summarizers (0 milliseconds)
[info] - merging summarizer with empty summarizer (1 millisecond)
[info] - merging summarizer when one side has zero mean (SPARK-4355) (0 milliseconds)
[info] - merging summarizer with weighted samples (1 millisecond)
[info] - test min/max with weighted samples (SPARK-16561) (2 milliseconds)
[info] - test zero variance (SPARK-21818) (0 milliseconds)
[info] LassoSuite:
[info] - Lasso local random SGD (871 milliseconds)
[info] - Lasso local random SGD with initial weights (883 milliseconds)
[info] - model save/load (469 milliseconds)
[info] MLPairRDDFunctionsSuite:
[info] - topByKey (47 milliseconds)
[info] LogisticAggregatorSuite:
[info] - aggregator add method input size (33 milliseconds)
[info] - negative weight (21 milliseconds)
[info] - check sizes multinomial (50 milliseconds)
[info] - check sizes binomial (43 milliseconds)
[info] - check correctness multinomial (158 milliseconds)
[info] - check correctness binomial (182 milliseconds)
[info] - check with zero standard deviation (77 milliseconds)
[info] FPGrowthSuite:
[info] - FP-Growth using String type (377 milliseconds)
[info] - FP-Growth String type association rule generation (101 milliseconds)
[info] - FP-Growth using Int type (347 milliseconds)
[info] - model save/load with String type (754 milliseconds)
[info] - model save/load with Int type (680 milliseconds)
[info] ReadWriteSuite:
[info] - unsupported/non existent export formats (303 milliseconds)
[info] - invalid paths fail (210 milliseconds)
[info] - dummy export format is called (205 milliseconds)
[info] - duplicate format raises error (216 milliseconds)
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaWord2VecSuite.testJavaWord2Vec started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.299s
[info] Test run started
[info] Test org.apache.spark.ml.attribute.JavaAttributeSuite.testBinaryAttribute started
[info] Test org.apache.spark.ml.attribute.JavaAttributeSuite.testNominalAttribute started
[info] Test org.apache.spark.ml.attribute.JavaAttributeSuite.testAttributeType started
[info] Test org.apache.spark.ml.attribute.JavaAttributeSuite.testNumericAttribute started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.mllib.linalg.distributed.JavaRowMatrixSuite.rowMatrixQRDecomposition started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.156s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaBucketizerSuite.bucketizerTest started
[info] Test org.apache.spark.ml.feature.JavaBucketizerSuite.bucketizerMultipleColumnsTest started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.162s
[info] Test run started
[info] Test org.apache.spark.ml.JavaPipelineSuite.pipeline started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.84s
[info] Test run started
[info] Test org.apache.spark.mllib.fpm.JavaFPGrowthSuite.runFPGrowthSaveLoad started
[info] Test org.apache.spark.mllib.fpm.JavaFPGrowthSuite.runFPGrowth started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.811s
[info] Test run started
[info] Test org.apache.spark.ml.stat.JavaSummarizerSuite.testSummarizer started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.196s
[info] Test run started
[info] Test org.apache.spark.mllib.evaluation.JavaRankingMetricsSuite.rankingMetrics started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.088s
[info] Test run started
[info] Test org.apache.spark.mllib.clustering.JavaGaussianMixtureSuite.runGaussianMixture started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.14s
[info] Test run started
[info] Test org.apache.spark.mllib.classification.JavaNaiveBayesSuite.testPredictJavaRDD started
[info] Test org.apache.spark.mllib.classification.JavaNaiveBayesSuite.runUsingConstructor started
[info] Test org.apache.spark.mllib.classification.JavaNaiveBayesSuite.runUsingStaticMethods started
[info] Test org.apache.spark.mllib.classification.JavaNaiveBayesSuite.testModelTypeSetters started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.093s
[info] Test run started
[info] Test org.apache.spark.mllib.regression.JavaRidgeRegressionSuite.runRidgeRegressionUsingConstructor started
[info] Test org.apache.spark.mllib.regression.JavaRidgeRegressionSuite.runRidgeRegressionUsingStaticMethods started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 9.244s
[info] Test run started
[info] Test org.apache.spark.ml.param.JavaParamsSuite.testParams started
[info] Test org.apache.spark.ml.param.JavaParamsSuite.testParamValidate started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.007s
[info] Test run started
[info] Test org.apache.spark.mllib.feature.JavaTfIdfSuite.tfIdfMinimumDocumentFrequency started
[info] Test org.apache.spark.mllib.feature.JavaTfIdfSuite.tfIdf started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.552s
[info] Test run started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runALSUsingStaticMethods started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runImplicitALSUsingConstructor started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runRecommend started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runImplicitALSWithNegativeWeight started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runImplicitALSUsingStaticMethods started
[info] Test org.apache.spark.mllib.recommendation.JavaALSSuite.runALSUsingConstructor started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 10.744s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaMultilayerPerceptronClassifierSuite.testMLPC started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.088s
[info] Test run started
[info] Test org.apache.spark.mllib.classification.JavaSVMSuite.runSVMUsingConstructor started
[info] Test org.apache.spark.mllib.classification.JavaSVMSuite.runSVMUsingStaticMethods started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.4s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaNormalizerSuite.normalizer started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.219s
[info] Test run started
[info] Test org.apache.spark.mllib.linalg.JavaVectorsSuite.denseArrayConstruction started
[info] Test org.apache.spark.mllib.linalg.JavaVectorsSuite.sparseArrayConstruction started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.mllib.regression.JavaStreamingLinearRegressionSuite.javaAPI started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.534s
[info] Test run started
[info] Test org.apache.spark.mllib.clustering.JavaStreamingKMeansSuite.javaAPI started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.34s
[info] Test run started
[info] Test org.apache.spark.mllib.tree.JavaDecisionTreeSuite.runDTUsingStaticMethods started
[info] Test org.apache.spark.mllib.tree.JavaDecisionTreeSuite.runDTUsingConstructor started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.449s
[info] Test run started
[info] Test org.apache.spark.mllib.regression.JavaLinearRegressionSuite.testPredictJavaRDD started
[info] Test org.apache.spark.mllib.regression.JavaLinearRegressionSuite.runLinearRegressionUsingStaticMethods started
[info] Test org.apache.spark.mllib.regression.JavaLinearRegressionSuite.runLinearRegressionUsingConstructor started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 2.852s
[info] Test run started
[info] Test org.apache.spark.mllib.clustering.JavaKMeansSuite.testPredictJavaRDD started
[info] Test org.apache.spark.mllib.clustering.JavaKMeansSuite.runKMeansUsingConstructor started
[info] Test org.apache.spark.mllib.clustering.JavaKMeansSuite.runKMeansUsingStaticMethods started
[info] Test run finished: 0 failed, 0 ignored, 3 total, 0.985s
[info] Test run started
[info] Test org.apache.spark.mllib.classification.JavaStreamingLogisticRegressionSuite.javaAPI started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.494s
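Annotator's note on the JavaALSSuite runs above: they exercise the RDD-based spark.mllib ALS recommender. A hedged sketch of the same idea against the DataFrame-based API; the ratings data and parameter values here are invented:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

# Hypothetical explicit-feedback ratings; rank/maxIter picked arbitrarily.
spark = SparkSession.builder.master("local[2]").appName("als-demo").getOrCreate()
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0)],
    ["userId", "itemId", "rating"])

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=10, maxIter=5, implicitPrefs=False, coldStartStrategy="drop")
model = als.fit(ratings)
model.recommendForAllUsers(2).show(truncate=False)  # analogous to runRecommend
```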
[info] Test run started
[info] Test org.apache.spark.ml.clustering.JavaKMeansSuite.fitAndTransform started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.618s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaVectorAssemblerSuite.testVectorAssembler started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.099s
[info] Test run started
[info] Test org.apache.spark.mllib.fpm.JavaPrefixSpanSuite.runPrefixSpan started
[info] Test org.apache.spark.mllib.fpm.JavaPrefixSpanSuite.runPrefixSpanSaveLoad started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 1.144s
[info] Test run started
[info] Test org.apache.spark.mllib.fpm.JavaAssociationRulesSuite.runAssociationRules started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.059s
[info] Test run started
[info] Test org.apache.spark.mllib.regression.JavaLassoSuite.runLassoUsingConstructor started
[info] Test org.apache.spark.mllib.regression.JavaLassoSuite.runLassoUsingStaticMethods started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 3.128s
[info] Test run started
[info] Test org.apache.spark.ml.linalg.JavaSQLDataTypesSuite.testSQLDataTypes started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.002s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaDecisionTreeClassifierSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.442s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaStopWordsRemoverSuite.javaCompatibilityTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.11s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaGBTClassifierSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.569s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaLogisticRegressionSuite.logisticRegressionWithSetters started
[info] Test org.apache.spark.ml.classification.JavaLogisticRegressionSuite.logisticRegressionTrainingSummary started
[info] Test org.apache.spark.ml.classification.JavaLogisticRegressionSuite.logisticRegressionPredictorClassifierMethods started
[info] Test org.apache.spark.ml.classification.JavaLogisticRegressionSuite.logisticRegressionDefaultParams started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 2.687s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaTokenizerSuite.regexTokenizer started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.18s
[info] Test run started
[info] Test org.apache.spark.mllib.clustering.JavaBisectingKMeansSuite.twoDimensionalData started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.249s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaDCTSuite.javaCompatibilityTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.082s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaStandardScalerSuite.standardScaler started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.215s
[info] Test run started
[info] Test org.apache.spark.ml.attribute.JavaAttributeGroupSuite.testAttributeGroup started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.001s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaNaiveBayesSuite.testNaiveBayes started
[info] Test org.apache.spark.ml.classification.JavaNaiveBayesSuite.naiveBayesDefaultParams started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.279s
[info] Test run started
[info] Test org.apache.spark.ml.source.libsvm.JavaLibSVMRelationSuite.verifyLibSVMDF started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.254s
[info] Test run started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.zerosMatrixConstruction started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.identityMatrixConstruction started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.concatenateMatrices started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.sparseDenseConversion started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.randMatrixConstruction started
[info] Test org.apache.spark.mllib.linalg.JavaMatricesSuite.diagonalMatrixConstruction started
[info] Test run finished: 0 failed, 0 ignored, 6 total, 0.006s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaPCASuite.testPCA started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.35s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaRandomForestClassifierSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.514s
[info] Test run started
[info] Test org.apache.spark.ml.regression.JavaGBTRegressorSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.668s
[info] Test run started
[info] Test org.apache.spark.mllib.classification.JavaLogisticRegressionSuite.runLRUsingConstructor started
[info] Test org.apache.spark.mllib.classification.JavaLogisticRegressionSuite.runLRUsingStaticMethods started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 5.239s
[info] Test run started
[info] Test org.apache.spark.ml.regression.JavaRandomForestRegressorSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.331s
[info] Test run started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testNormalVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testArbitrary started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testLogNormalVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testExponentialVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testUniformRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testRandomVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testGammaRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testUniformVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testPoissonRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testNormalRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testPoissonVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testGammaVectorRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testExponentialRDD started
[info] Test org.apache.spark.mllib.random.JavaRandomRDDsSuite.testLNormalRDD started
[info] Test run finished: 0 failed, 0 ignored, 14 total, 1.689s
[info] Test run started
[info] Test org.apache.spark.mllib.util.JavaMLUtilsSuite.testConvertMatrixColumnsToAndFromML started
[info] Test org.apache.spark.mllib.util.JavaMLUtilsSuite.testConvertVectorColumnsToAndFromML started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.373s
[info] Test run started
[info] Test org.apache.spark.ml.tuning.JavaCrossValidatorSuite.crossValidationWithLogisticRegression started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 6.959s
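Annotator's note on the JavaCrossValidatorSuite run just above: it pairs logistic regression with k-fold cross-validation. A hedged Python sketch of the same pattern; the toy data and grid values below are placeholders, not the suite's fixtures:

```python
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Hypothetical toy data with "label"/"features" columns, repeated so that
# each cross-validation fold is likely to contain both classes.
spark = SparkSession.builder.master("local[2]").appName("cv-demo").getOrCreate()
train_df = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0, 1.0)), (1.0, Vectors.dense(1.0, 0.0)),
     (0.0, Vectors.dense(0.2, 0.9)), (1.0, Vectors.dense(0.9, 0.2))] * 5,
    ["label", "features"])

lr = LogisticRegression(maxIter=10)
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=lr, estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(), numFolds=3)
cv_model = cv.fit(train_df)   # selects the regParam with the best AUC
```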
[info] Test run started
[info] Test org.apache.spark.mllib.feature.JavaWord2VecSuite.word2Vec started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.169s
[info] Test run started
[info] Test org.apache.spark.ml.util.JavaDefaultReadWriteSuite.testDefaultReadWrite started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.306s
[info] Test run started
[info] Test org.apache.spark.ml.classification.JavaOneVsRestSuite.oneVsRestDefaultParams started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 2.556s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaHashingTFSuite.hashingTF started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.145s
[info] Test run started
[info] Test org.apache.spark.mllib.stat.JavaStatisticsSuite.testCorr started
[info] Test org.apache.spark.mllib.stat.JavaStatisticsSuite.chiSqTest started
[info] Test org.apache.spark.mllib.stat.JavaStatisticsSuite.streamingTest started
[info] Test org.apache.spark.mllib.stat.JavaStatisticsSuite.kolmogorovSmirnovTest started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 0.706s
[info] Test run started
[info] Test org.apache.spark.mllib.regression.JavaIsotonicRegressionSuite.testIsotonicRegressionJavaRDD started
[info] Test org.apache.spark.mllib.regression.JavaIsotonicRegressionSuite.testIsotonicRegressionPredictionsJavaRDD started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.237s
[info] Test run started
[info] Test org.apache.spark.ml.stat.JavaKolmogorovSmirnovTestSuite.testKSTestNamedDistribution started
[info] Test org.apache.spark.ml.stat.JavaKolmogorovSmirnovTestSuite.testKSTestCDF started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.376s
[info] Test run started
[info] Test org.apache.spark.mllib.clustering.JavaLDASuite.onlineOptimizerCompatibility started
[info] Test org.apache.spark.mllib.clustering.JavaLDASuite.distributedLDAModel started
[info] Test org.apache.spark.mllib.clustering.JavaLDASuite.localLDAModel started
[info] Test org.apache.spark.mllib.clustering.JavaLDASuite.localLdaMethods started
[info] Test run finished: 0 failed, 0 ignored, 4 total, 1.1s
[info] Test run started
[info] Test org.apache.spark.ml.regression.JavaDecisionTreeRegressorSuite.runDT started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.3s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaVectorIndexerSuite.vectorIndexerAPI started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.162s
[info] Test run started
[info] Test org.apache.spark.ml.regression.JavaLinearRegressionSuite.linearRegressionDefaultParams started
[info] Test org.apache.spark.ml.regression.JavaLinearRegressionSuite.linearRegressionWithSetters started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 0.909s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaPolynomialExpansionSuite.polynomialExpansionTest started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.098s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaStringIndexerSuite.testStringIndexer started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.441s
[info] Test run started
[info] Test org.apache.spark.ml.feature.JavaVectorSlicerSuite.vectorSlice started
[info] Test run finished: 0 failed, 0 ignored, 1 total, 0.096s
[info] ScalaTest
[info] Run completed in 29 minutes, 40 seconds.
[info] Total number of tests run: 1622
[info] Suites: completed 204, aborted 0
[info] Tests: succeeded 1622, failed 0, canceled 0, ignored 7, pending 0
[info] All tests passed.
[info] Passed: Total 1744, Failed 0, Errors 0, Passed 1744, Ignored 7
[success] Total time: 1804 s (30:04), completed Jan 17, 2021 9:42:57 AM
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/sql/pandas/utils.py", line 57, in require_minimum_pyarrow_version
    "your version was %s." % (minimum_pyarrow_version, pyarrow.__version__))
ImportError: PyArrow >= 1.0.0 must be installed; however, your version was 0.15.1.
[info] SQLQueryTestSuite:
09:43:02.775 WARN org.apache.spark.util.Utils: Your hostname, research-jenkins-worker-09 resolves to a loopback address: 127.0.1.1; using 192.168.10.31 instead (on interface enp4s0f0)
09:43:02.776 WARN org.apache.spark.util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
09:43:03.153 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
09:43:03.551 WARN org.apache.spark.util.Utils: Your hostname, research-jenkins-worker-09 resolves to a loopback address: 127.0.1.1; using 192.168.10.31 instead (on interface enp4s0f0)
09:43:03.552 WARN org.apache.spark.util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[info] SQLQuerySuite:
09:43:03.963 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - SPARK-8010: promote numeric to string (1 second, 848 milliseconds)
[info] - show functions (1 second, 1 millisecond)
[info] - describe functions (67 milliseconds)
[info] - show-tblproperties.sql (668 milliseconds)
[info] - parse-schema-string.sql (614 milliseconds)
[info] - describe-table-after-alter-table.sql (283 milliseconds)
[info] - SPARK-14415: All functions should have own descriptions (3 seconds, 310 milliseconds)
[info] - SPARK-6743: no columns from cache (1 second, 600 milliseconds)
[info] - self join with aliases (1 second, 131 milliseconds)
[info] - decimalArithmeticOperations.sql (3 seconds, 934 milliseconds)
[info] - support table.star (549 milliseconds)
[info] - self join with alias in agg (1 second, 249 milliseconds)
[info] - SPARK-8668 expr function (562 milliseconds)
[info] - SPARK-4625 support SORT BY in SimpleSQLParser & DSL (240 milliseconds)
[info] - SPARK-7158 collect and take return different results (712 milliseconds)
[info] - grouping on nested fields (617 milliseconds)
[info] - SPARK-6201 IN type conversion (188 milliseconds)
[info] - SPARK-11226 Skip empty line in json file (244 milliseconds)
[info] - SPARK-8828 sum should return null if all input values are null (210 milliseconds)
[info] - datetime.sql (5 seconds, 894 milliseconds)
[info] - sql-compatibility-functions.sql (1 second, 45 milliseconds)
09:43:25.433 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.434 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
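Annotator's note on the ImportError above: it comes from PySpark's minimum-version guard for PyArrow. A rough sketch of how such a guard works, simplified from what pyspark.sql.pandas.utils does at the time of this build (the exact source differs in detail):

```python
from distutils.version import LooseVersion  # what 2021-era PySpark used

def require_minimum_pyarrow_version():
    """Simplified sketch of a minimum-version guard; not the exact source."""
    minimum_pyarrow_version = "1.0.0"  # the floor enforced in this build
    try:
        import pyarrow
    except ImportError:
        raise ImportError("PyArrow >= %s must be installed." % minimum_pyarrow_version)
    if LooseVersion(pyarrow.__version__) < LooseVersion(minimum_pyarrow_version):
        raise ImportError(
            "PyArrow >= %s must be installed; however, "
            "your version was %s." % (minimum_pyarrow_version, pyarrow.__version__))
```

Here the worker has pyarrow 0.15.1, so the guard raises; presumably the Arrow-dependent PySpark tests are skipped or fail until the worker's pyarrow is upgraded, while the Scala suites below proceed normally.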
09:43:25.465 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.466 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.552 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.553 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.577 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:25.577 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - null-propagation.sql (453 milliseconds)
09:43:26.431 WARN org.apache.spark.sql.execution.command.AlterTableRecoverPartitionsCommand: ignore file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/org.apache.spark.sql.SQLQueryTestSuite/char_part/loc1
[info] - charvarchar.sql (839 milliseconds)
[info] - cross-join.sql (1 second, 413 milliseconds)
09:43:31.772 WARN org.apache.spark.sql.catalyst.util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
[info] - aggregation with codegen (9 seconds, 918 milliseconds)
[info] - Add Parser of SQL COALESCE() (338 milliseconds)
[info] - SPARK-3176 Added Parser of SQL LAST() (228 milliseconds)
09:43:33.138 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.161 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.162 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-2041 column name equals tablename (145 milliseconds)
[info] - SQRT (112 milliseconds)
09:43:33.340 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.363 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.364 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SQRT with automatic string casts (114 milliseconds)
09:43:33.708 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.742 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.742 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-2407 Added Parser of SQL SUBSTR() (393 milliseconds)
09:43:33.882 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.905 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:33.905 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.016 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.038 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.039 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.174 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.197 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.197 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.349 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.373 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.373 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.479 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.498 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.499 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.606 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.625 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.625 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-3173 Timestamp support in the parser (919 milliseconds)
09:43:34.743 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.760 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.761 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.858 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.877 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:34.877 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - left semi greater than predicate (247 milliseconds)
09:43:35.383 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:35.409 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:35.409 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - left semi greater than predicate and equal operator (547 milliseconds)
09:43:35.665 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - select * (138 milliseconds)
09:43:35.689 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:35.689 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - simple select (110 milliseconds)
09:43:35.868 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:35.892 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:35.893 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.087 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.113 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.114 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.282 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.307 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.308 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.484 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.506 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.507 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.646 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.669 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.669 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.784 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.803 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.803 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.922 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:36.940 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - external sorting (3 seconds, 300 milliseconds)
[info] - CTE feature (384 milliseconds)
[info] - Allow only a single WITH clause per query (2 milliseconds)
[info] - date row (166 milliseconds)
[info] - from follow multiple brackets (594 milliseconds)
[info] - average (157 milliseconds)
[info] - average overflow (325 milliseconds)
[info] - count (220 milliseconds)
[info] - count distinct (389 milliseconds)
09:43:41.337 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.360 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.360 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.363 WARN org.apache.spark.sql.catalyst.util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
09:43:41.476 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.503 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.503 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - approximate count distinct (451 milliseconds)
09:43:41.802 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.831 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.831 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.949 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.966 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:41.967 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.076 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.095 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.096 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.204 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.221 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.221 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - approximate count distinct with user provided standard deviation (448 milliseconds)
09:43:42.333 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.350 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.351 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.429 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.446 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.446 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.524 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.541 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.541 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.622 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.639 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.639 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.719 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.737 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:42.737 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - null count (873 milliseconds)
09:43:43.142 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.165 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.165 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - count of empty table (112 milliseconds)
09:43:43.355 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.371 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.371 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - inner join where, one match per row (253 milliseconds)
09:43:43.479 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.497 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.497 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.620 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.636 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.636 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - inner join ON, one match per row (239 milliseconds)
09:43:43.748 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.764 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.764 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.866 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.883 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:43.884 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - inner join, where, multiple matches (274 milliseconds)
09:43:43.992 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.008 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.009 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.100 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.145 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.145 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - inner join, no matches (221 milliseconds)
09:43:44.244 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.263 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:44.264 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - big inner join, 4 matches per row (683 milliseconds)
[info] - cartesian product join (164 milliseconds)
[info] - left outer join (259 milliseconds)
[info] - right outer join (260 milliseconds)
[info] - full outer join (544 milliseconds)
[info] - SPARK-11111 null-safe join should not use cartesian product (450 milliseconds)
[info] - SPARK-3349 partitioning after limit (999 milliseconds)
09:43:47.583 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:43:47.600 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.601 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.704 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.723 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.723 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - mixed-case keywords (394 milliseconds) 09:43:47.964 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.981 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:47.982 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - select with table name as qualifier (109 milliseconds) 09:43:48.074 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.096 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.096 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.186 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.204 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.205 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.296 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.353 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.353 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.442 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.456 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.457 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - inner join ON with table name as qualifier (456 milliseconds) 09:43:48.561 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.577 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.577 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.659 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.675 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.676 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.759 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - qualified select with inner join ON with table name as qualifier (237 milliseconds) 09:43:48.776 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.776 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.856 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.872 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:48.873 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - system function upper() (205 milliseconds) [info] - system function lower() (199 milliseconds) 09:43:49.249 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.266 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:43:49.266 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.370 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.385 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.386 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.483 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.499 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.499 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.595 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.611 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.611 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.710 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.726 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.726 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.834 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.851 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.851 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.949 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:43:49.965 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
[info] - window.sql (22 seconds, 375 milliseconds)
[info] - UNION (1 second, 125 milliseconds)
[info] - UNION with column mismatches (1 second, 73 milliseconds)
[info] - EXCEPT (1 second, 292 milliseconds)
[info] - MINUS (1 second, 421 milliseconds)
[info] - datetime-legacy.sql (3 seconds, 868 milliseconds)
[info] - INTERSECT (742 milliseconds)
[info] - SET commands semantics using sql() (234 milliseconds)
[info] - SPARK-19218 SET command should show a result in a sorted order (467 milliseconds)
[info] - SPARK-19218 `SET -v` should not fail with null value configuration (25 milliseconds)
09:43:55.565 WARN org.apache.spark.sql.execution.command.SetCommand: Property mapred.reduce.tasks is deprecated, automatically converted to spark.sql.shuffle.partitions instead.
[info] - SET commands with illegal or inappropriate argument (20 milliseconds)
09:43:55.584 WARN org.apache.spark.sql.execution.command.SetCommand: Property mapreduce.job.reduces is Hadoop's property, automatically converted to spark.sql.shuffle.partitions instead.
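The SetCommand warnings above document an automatic rewrite: the legacy Hadoop/MapReduce reducer-count properties are accepted but converted to spark.sql.shuffle.partitions. A short sketch of that conversion, reusing the hypothetical spark session from the earlier sketch:

  // Either legacy property name is rewritten to the Spark SQL config,
  // producing the deprecation warning shown above.
  spark.sql("SET mapred.reduce.tasks=10")
  spark.sql("SET mapreduce.job.reduces=10")
  println(spark.conf.get("spark.sql.shuffle.partitions")) // expected: 10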
[info] - SET mapreduce.job.reduces automatically converted to spark.sql.shuffle.partitions (12 milliseconds)
[info] - apply schema (675 milliseconds)
[info] - SPARK-3423 BETWEEN (292 milliseconds)
[info] - SPARK-17863: SELECT distinct does not work correctly if order by missing attribute (549 milliseconds)
[info] - cast boolean to string (137 milliseconds)
[info] - metadata is propagated correctly (63 milliseconds)
[info] - SPARK-3371 Renaming a function expression with group by gives error (320 milliseconds)
[info] - SPARK-3813 CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END (280 milliseconds)
[info] - SPARK-3813 CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END (273 milliseconds)
[info] - SPARK-16748: SparkExceptions during planning should not wrapped in TreeNodeException (161 milliseconds)
[info] - Multiple join (349 milliseconds)
[info] - SPARK-3483 Special chars in column names (78 milliseconds)
[info] - SPARK-3814 Support Bitwise & operator (108 milliseconds)
[info] - SPARK-3814 Support Bitwise | operator (104 milliseconds)
[info] - SPARK-3814 Support Bitwise ^ operator (102 milliseconds)
[info] - SPARK-3814 Support Bitwise ~ operator (102 milliseconds)
[info] - SPARK-4120 Join of multiple tables does not work in SparkSQL (282 milliseconds)
[info] - SPARK-4154 Query does not work if it has 'not between' in Spark SQL and HQL (330 milliseconds)
[info] - SPARK-4207 Query which has syntax like 'not like' is not working in Spark SQL (318 milliseconds)
[info] - SPARK-4322 Grouping field with struct field as sub expression (635 milliseconds)
[info] - intersect-all.sql (6 seconds, 806 milliseconds)
[info] - SPARK-4432 Fix attribute reference resolution error when using ORDER BY (313 milliseconds)
[info] - order by asc by default when not specify ascending and descending (286 milliseconds)
[info] - Supporting relational operator '<=>' in Spark SQL (337 milliseconds)
[info] - Multi-column COUNT(DISTINCT ...) (364 milliseconds)
[info] - SPARK-4699 case sensitivity SQL query (131 milliseconds)
[info] - group-by-ordinal.sql (2 seconds, 621 milliseconds)
09:44:03.725 WARN org.apache.spark.util.HadoopFSUtils: The directory file:/path/to/table was not found. Was it deleted very recently?
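One of the results above covers '<=>', Spark's null-safe equality operator. Its semantics in one line (hypothetical session as before):

  // '=' yields NULL when either side is NULL; '<=>' treats two NULLs as equal.
  spark.sql("SELECT NULL = NULL AS plain_eq, NULL <=> NULL AS null_safe_eq").show()
  // plain_eq: null, null_safe_eq: true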
[info] - SPARK-6145: ORDER BY test for nested fields (1 second, 761 milliseconds)
[info] - show-create-table.sql (466 milliseconds)
[info] - SPARK-6145: special cases (520 milliseconds)
[info] - SPARK-6898: complete support for special chars in column names (164 milliseconds)
[info] - extract.sql (2 seconds, 833 milliseconds)
[info] - cte-nested.sql (494 milliseconds)
[info] - SPARK-6583 order by aggregated function (4 seconds, 661 milliseconds)
[info] - count.sql (2 seconds, 67 milliseconds)
[info] - SPARK-7952: fix the equality check between boolean and numeric types (220 milliseconds)
[info] - comments.sql (288 milliseconds)
09:44:09.849 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
[info] - SPARK-7067: order by queries for complex ExtractValue chain (396 milliseconds)
[info] - SPARK-8782: ORDER BY NULL (90 milliseconds)
[info] - SPARK-8837: use keyword in column name (181 milliseconds)
[info] - SPARK-8753: add interval type (143 milliseconds)
[info] - SPARK-8945: add and subtract expressions for interval type (282 milliseconds)
[info] - aggregation with codegen updates peak execution memory (358 milliseconds)
[info] - describe-table-column.sql (1 second, 347 milliseconds)
[info] - SPARK-10215 Div of Decimal returns null (343 milliseconds)
[info] - precision smaller than scale (555 milliseconds)
[info] - external sorting updates peak execution memory (169 milliseconds)
[info] - SPARK-9511: error with table starting with number (188 milliseconds)
[info] - specifying database name for a temporary view is not allowed (1 second, 501 milliseconds)
[info] - SPARK-10130 type coercion for IF should have children resolved first (82 milliseconds)
[info] - SPARK-10389: order by non-attribute grouping expression on Aggregate (1 second, 85 milliseconds)
[info] - outer-join.sql (4 seconds, 477 milliseconds)
[info] - datetime-parsing-invalid.sql (381 milliseconds)
[info] - inline-table.sql (125 milliseconds)
[info] - struct.sql (323 milliseconds)
[info] - SPARK-23281: verify the correctness of sort direction on composite order by clause (1 second, 756 milliseconds)
[info] - run sql directly on files (859 milliseconds)
[info] - SortMergeJoin returns wrong results when using UnsafeRows (897 milliseconds)
[info] - SPARK-11303: filter should not be pushed down into sample (511 milliseconds)
[info] - interval.sql (2 seconds, 474 milliseconds)
[info] - Struct Star Expansion (3 seconds, 717 milliseconds)
[info] - Struct Star Expansion - Name conflict (260 milliseconds)
[info] - Star Expansion - group by (513 milliseconds)
[info] - Star Expansion - table with zero column (287 milliseconds)
[info] - group-analytics.sql (6 seconds, 138 milliseconds)
[info] - describe-query.sql (201 milliseconds)
[info] - Common subexpression elimination (1 second, 887 milliseconds)
[info] - SPARK-10707: nullability should be correctly propagated through set operations (1) (175 milliseconds)
[info] - SPARK-10707: nullability should be correctly propagated through set operations (2) (183 milliseconds)
[info] - filter on a grouping column that is not presented in SELECT (275 milliseconds)
[info] - SPARK-13056: Null in map value causes NPE (148 milliseconds)
[info] - hash function (83 milliseconds)
[info] - SPARK-27619: Throw analysis exception when hash and xxhash64 is used on MapType (30 milliseconds)
[info] - cte-nonlegacy.sql (1 second, 116 milliseconds)
[info] - SPARK-27619: When spark.sql.legacy.allowHashOnMapType is true, hash can be used on Maptype (196 milliseconds)
[info] - xxhash64 function (77 milliseconds)
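The hash and xxhash64 results above exercise the two built-in hash functions, and the SPARK-27619 cases pin down their behavior on map columns. A small sketch (hypothetical session as before):

  // hash() is Murmur3-based and returns a 32-bit int; xxhash64() returns a 64-bit long.
  spark.sql("SELECT hash('spark', 1) AS h32, xxhash64('spark', 1) AS h64").show()

  // Hashing a MapType column fails analysis by default (SPARK-27619); the legacy
  // flag tested above re-enables it.
  spark.conf.set("spark.sql.legacy.allowHashOnMapType", "true")
  spark.sql("SELECT hash(map(1, 2)) AS hm").show()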
09:44:26.916 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Table or view not found: t; line 1 pos 14;
'SubqueryAlias temp_v
+- View (`temp_v`, [a#20823,b#20824,c#20825,d#20826])
   +- 'Project [*]
      +- 'UnresolvedRelation [t], [], false
org.apache.spark.sql.AnalysisException: Table or view not found: t; line 1 pos 14;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:122)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:94)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:183)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:182)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:182)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:182)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:182)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:94)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:91)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:154)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:175)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:228)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:172)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:88)
    at org.apache.spark.sql.SparkSession.table(SparkSession.scala:597)
    at org.apache.spark.sql.execution.command.DropTableCommand.run(ddl.scala:241)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3699)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3697)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
    at org.apache.spark.sql.SQLQueryTestSuite.getNormalizedResult(SQLQueryTestSuite.scala:513)
    at org.apache.spark.sql.SQLQueryTestSuite.$anonfun$runQueries$8(SQLQueryTestSuite.scala:394)
    at org.apache.spark.sql.SQLQueryTestSuite.handleExceptions(SQLQueryTestSuite.scala:480)
    at org.apache.spark.sql.SQLQueryTestSuite.$anonfun$runQueries$7(SQLQueryTestSuite.scala:394)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at org.apache.spark.sql.SQLQueryTestSuite.runQueries(SQLQueryTestSuite.scala:393)
    at org.apache.spark.sql.SQLQueryTestSuite.$anonfun$runTest$34(SQLQueryTestSuite.scala:345)
    at org.apache.spark.sql.SQLQueryTestSuite.$anonfun$runTest$34$adapted(SQLQueryTestSuite.scala:343)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.SQLQueryTestSuite.runTest(SQLQueryTestSuite.scala:343)
    at org.apache.spark.sql.SQLQueryTestSuite.$anonfun$createScalaTestCase$5(SQLQueryTestSuite.scala:247)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
    at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
    at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    at org.scalatest.Transformer.apply(Transformer.scala:22)
    at org.scalatest.Transformer.apply(Transformer.scala:20)
    at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:190)
    at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
    at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:188)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:200)
    at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:200)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:182)
    at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
    at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
    at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
    at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:233)
    at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
    at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:233)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:232)
    at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1563)
    at org.scalatest.Suite.run(Suite.scala:1112)
    at org.scalatest.Suite.run$(Suite.scala:1094)
    at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1563)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:237)
    at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
    at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:237)
    at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:236)
    at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
    at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
    at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
    at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
    at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
    at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
    at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
    at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:44:26.962 WARN org.apache.spark.sql.execution.command.DropTableCommand: org.apache.spark.sql.AnalysisException: Table or view not found: t; line 1 pos 14;
'SubqueryAlias spark_catalog.default.v
+- View (`default`.`v`, [a#20843,b#20844,c#20845,d#20846])
   +- 'Project [*]
      +- 'UnresolvedRelation [t], [], false
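Both DropTableCommand warnings above match the pattern of dropping a view whose underlying table is already gone: resolving the view fails, the AnalysisException is logged, and the drop itself still succeeds. A plausible reconstruction (the names mirror the logged plans, but the exact test SQL is an assumption):

  spark.sql("CREATE TABLE t(a INT, b INT, c INT, d INT) USING parquet")
  spark.sql("CREATE TEMPORARY VIEW temp_v AS SELECT * FROM t")
  spark.sql("DROP TABLE t")
  // Resolving temp_v now fails with "Table or view not found: t";
  // DropTableCommand logs the exception and drops the view anyway.
  spark.sql("DROP VIEW temp_v")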
[info] - describe.sql (464 milliseconds)
[info] - udaf.sql (521 milliseconds)
[info] - pred-pushdown.sql (204 milliseconds)
[info] - csv-functions.sql (278 milliseconds)
[info] - datetime-parsing-legacy.sql (677 milliseconds)
09:44:28.817 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
[info] - join with using clause (2 seconds, 59 milliseconds)
[info] - show-tables.sql (281 milliseconds)
09:44:29.090 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
[info] - show-views.sql (159 milliseconds)
[info] - SPARK-15327: fail to compile generated code with complex data structure (749 milliseconds)
[info] - join-empty-relation.sql (748 milliseconds)
[info] - timezone.sql (41 milliseconds)
09:44:30.264 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1484.0 (TID 1670)
java.lang.RuntimeException: 'false' is not true!
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:44:30.286 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1484.0 (TID 1670) (192.168.10.31 executor driver): java.lang.RuntimeException: 'false' is not true!
09:44:30.288 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1484.0 failed 1 times; aborting job
09:44:30.327 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1485.0 (TID 1671)
java.lang.RuntimeException: 'cast(0 as boolean)' is not true!
09:44:30.329 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1485.0 (TID 1671) (192.168.10.31 executor driver): java.lang.RuntimeException: 'cast(0 as boolean)' is not true!
09:44:30.329 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1485.0 failed 1 times; aborting job
09:44:30.362 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1486.0 (TID 1672)
java.lang.RuntimeException: 'null' is not true!
09:44:30.364 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1486.0 (TID 1672) (192.168.10.31 executor driver): java.lang.RuntimeException: 'null' is not true!
09:44:30.364 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1486.0 failed 1 times; aborting job
[info] - data source table created in InMemoryCatalog should be able to read/write (788 milliseconds)
09:44:30.403 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1487.0 (TID 1673)
java.lang.RuntimeException: 'cast(null as boolean)' is not true!
09:44:30.406 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1487.0 (TID 1673) (192.168.10.31 executor driver): java.lang.RuntimeException: 'cast(null as boolean)' is not true!
09:44:30.406 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1487.0 failed 1 times; aborting job
[info] - Eliminate noop ordinal ORDER BY (44 milliseconds)
09:44:30.442 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1488.0 (TID 1674)
java.lang.RuntimeException: custom error message
09:44:30.443 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1488.0 (TID 1674) (192.168.10.31 executor driver): java.lang.RuntimeException: custom error message
09:44:30.444 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1488.0 failed 1 times; aborting job
09:44:30.486 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1489.0 (TID 1675)
java.lang.RuntimeException: error message
09:44:30.488 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1489.0 (TID 1675) (192.168.10.31 executor driver): java.lang.RuntimeException: error message
09:44:30.488 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1489.0 failed 1 times; aborting job
09:44:30.572 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 1490.0 (TID 1677)
java.lang.RuntimeException: too big: 8
09:44:30.574 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 1490.0 (TID 1677) (192.168.10.31 executor driver): java.lang.RuntimeException: too big: 8
09:44:30.574 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 1490.0 failed 1 times; aborting job
[info] - misc-functions.sql (594 milliseconds)
[info] - check code injection is prevented (1 second, 313 milliseconds)
09:44:31.741 WARN org.apache.spark.sql.internal.WithTestConf$$anon$4: The SQL config 'spark.sql.optimizer.metadataOnly' has been deprecated in Spark v3.0 and may be removed in the future. Avoid to depend on this optimization to prevent a potential correctness issue. If you must use, use 'SparkSessionExtensions' instead to inject it as a custom rule.
09:44:32.089 WARN org.apache.spark.sql.execution.OptimizeMetadataOnlyQuery: Since configuration `spark.sql.optimizer.metadataOnly` is enabled, Spark will scan partition-level metadata without scanning data files. This could result in wrong results when the partition metadata exists but the inclusive data files are empty.
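The RuntimeExceptions in stages 1484 through 1490 above ('false' is not true!, custom error message, too big: 8) are expected failures from the SQL assertion and error-raising functions covered by misc-functions.sql. A sketch of the functions involved (hypothetical session; raise_error assumes Spark 3.1+):

  // assert_true(expr[, msg]) returns NULL when expr holds; otherwise the task
  // fails with java.lang.RuntimeException("'<expr>' is not true!") or msg.
  spark.sql("SELECT assert_true(2 > 1)").show()     // passes, yields NULL
  // spark.sql("SELECT assert_true(false)").show()  // RuntimeException: 'false' is not true!

  // raise_error(msg) always throws with the supplied message.
  // spark.sql("SELECT raise_error('custom error message')").show()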
[info] - transform.sql (1 second, 620 milliseconds)
09:44:32.245 WARN org.apache.spark.sql.internal.SQLConf: The SQL config 'spark.sql.optimizer.metadataOnly' has been deprecated in Spark v3.0 and may be removed in the future. Avoid to depend on this optimization to prevent a potential correctness issue. If you must use, use 'SparkSessionExtensions' instead to inject it as a custom rule.
[info] - limit.sql (769 milliseconds)
[info] - datetime-formatting-legacy.sql (1 second, 368 milliseconds)
[info] - SPARK-15752 optimize metadata only query for datasource table (3 seconds, 160 milliseconds)
[info] - SPARK-16975: Column-partition path starting '_' should be handled correctly (327 milliseconds)
[info] - explain.sql (1 second, 35 milliseconds)
[info] - SPARK-16644: Aggregate should not put aggregate expressions to constraints (286 milliseconds)
[info] - table-valued-functions.sql (380 milliseconds)
[info] - SPARK-16674: field names containing dots for both fields and partitioned fields (437 milliseconds)
[info] - SPARK-17515: CollectLimit.execute() should perform per-partition limits (105 milliseconds)
[info] - CREATE TABLE USING should not fail if a same-name temp view exists (165 milliseconds)
[info] - SPARK-18053: ARRAY equality is broken (292 milliseconds)
[info] - SPARK-19157: should be able to change spark.sql.runSQLOnFiles at runtime (328 milliseconds)
[info] - query_regex_column.sql (1 second, 295 milliseconds)
[info] - should be able to resolve a persistent view (1 second, 9 milliseconds)
[info] - SPARK-19059: read file based table whose name starts with underscore (267 milliseconds)
[info] - SPARK-19334: check code injection is prevented (117 milliseconds)
[info] - SPARK-19650: An action on a Command should not trigger a Spark job (64 milliseconds)
[info] - SPARK-20164: AnalysisException should be tolerant to null query plan (2 milliseconds)
[info] - SPARK-12868: Allow adding jars from hdfs (2 milliseconds)
[info] - RuntimeReplaceable functions should not take extra parameters (5 milliseconds)
[info] - SPARK-21228: InSet incorrect handling of structs (243 milliseconds)
[info] - SPARK-21247: Allow case-insensitive type equality in Set operation (901 milliseconds)
[info] - SPARK-21335: support un-aliased subquery (111 milliseconds)
[info] - SPARK-21743: top-most limit should not cause memory leak (116 milliseconds)
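The deprecation warnings above spell out the migration path: rather than enabling 'spark.sql.optimizer.metadataOnly', inject the optimization through SparkSessionExtensions. A minimal Scala sketch of that wiring, assuming a local session; the rule below is a named placeholder, not Spark's actual rewrite:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.catalyst.rules.Rule

    // Placeholder rule: a real replacement would rewrite eligible aggregates to
    // read partition-level metadata; this one passes plans through unchanged.
    case class MetadataOnlyRewrite(session: SparkSession) extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan
    }

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("extensions-sketch")
      .withExtensions(_.injectOptimizerRule(session => MetadataOnlyRewrite(session)))
      .getOrCreate()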
[info] - SPARK-21652: rule confliction of InferFiltersFromConstraints and ConstantPropagation (270 milliseconds)
[info] - grouping_set.sql (3 seconds, 186 milliseconds)
[info] - SPARK-23079: constraints should be inferred correctly with aliases (529 milliseconds)
[info] - datetime-formatting-invalid.sql (370 milliseconds)
[info] - SPARK-22266: the same aggregate function was calculated multiple times (328 milliseconds)
[info] - Support filter clause for aggregate function with hash aggregate (322 milliseconds)
[info] - Support filter clause for aggregate function uses SortAggregateExec (172 milliseconds)
[info] - Non-deterministic aggregate functions should not be deduplicated (25 milliseconds)
09:44:41.558 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `p`
09:44:41.558 WARN org.apache.spark.sql.execution.command.CreateDataSourceTableCommand: It is not recommended to create a table with overlapped data and partition columns, as Spark cannot store a valid table schema and has to infer it at runtime, which hurts performance. Please check your data files and remove the partition columns in it.
09:44:41.623 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `p`
[info] - literals.sql (890 milliseconds)
[info] - SPARK-22356: overlapped columns between data and partition schema in data source tables (451 milliseconds)
[info] - comparator.sql (226 milliseconds)
[info] - SPARK-24696 ColumnPruning rule fails to remove extra Project (955 milliseconds)
[info] - json-functions.sql (1 second, 559 milliseconds)
[info] - ignored.sql !!! IGNORED !!!
[info] - cte.sql (437 milliseconds)
[info] - random.sql (202 milliseconds)
[info] - SPARK-24940: coalesce and repartition hint (2 seconds, 73 milliseconds)
[info] - SPARK-25084: 'distribute by' on multiple columns may lead to codegen issue (434 milliseconds)
[info] - SPARK-25144 'distinct' causes memory leak (209 milliseconds)
[info] - SPARK-25454: decimal division with negative scale (78 milliseconds)
09:44:45.742 WARN org.apache.spark.sql.execution.adaptive.InsertAdaptiveSparkPlan: spark.sql.adaptive.enabled is enabled but is not supported for query:
SortMergeJoin [tdate#15782], [tdate#15791], Inner
:- Project [tdate#15782, col1#15781 AS aliasCol1#15789]
: +- SortMergeJoin [tdate#15782], [tdate#15786], Inner
: :- FileScan csv default.tab1[col1#15781,TDATE#15782] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [isnotnull(TDATE#15782), (TDATE#15782 >= 17393), dynamicpruning#15797 [TDATE#15782], dynamicpruni..., PushedFilters: [], ReadSchema: struct
: : :- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
: : : +- Relation[TDATE#15786] parquet
: : +- Project [tdate#15782 AS tdate#15791, col1#15781 AS aliasCol1#15792]
: : +- Join Inner, (tdate#15782 = tdate#15786)
: : :- Filter ((isnotnull(TDATE#15782) AND (TDATE#15782 >= 17393)) AND dynamicpruning#15798 [tdate#15782])
: : : : +- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
: : : : +- Relation[TDATE#15786] parquet
: : : +- Relation[col1#15781,TDATE#15782] csv
: : +- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
: : +- Relation[TDATE#15786] parquet
: +- Project [TDATE#15786]
: +- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
: +- FileScan parquet default.tab2[TDATE#15786] Batched: true, DataFilters: [(TDATE#15786 >= 17393), isnotnull(TDATE#15786)], Format: Parquet, Location: InMemoryFileIndex[file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/o..., PartitionFilters: [], PushedFilters: [GreaterThanOrEqual(TDATE,2017-08-15), IsNotNull(TDATE)], ReadSchema: struct
+- Project [tdate#15782 AS tdate#15791, col1#15781 AS aliasCol1#15792]
+- SortMergeJoin [tdate#15782], [tdate#15786], Inner
:- FileScan csv default.tab1[col1#15781,TDATE#15782] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex[], PartitionFilters: [isnotnull(TDATE#15782), (TDATE#15782 >= 17393), dynamicpruning#15798 [TDATE#15782]], PushedFilters: [], ReadSchema: struct
: +- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
: +- Relation[TDATE#15786] parquet
+- Project [TDATE#15786]
+- Filter ((tdate#15786 >= 17393) AND isnotnull(tdate#15786))
+- FileScan parquet default.tab2[TDATE#15786] Batched: true, DataFilters: [(TDATE#15786 >= 17393), isnotnull(TDATE#15786)], Format: Parquet, Location: InMemoryFileIndex[file:/home/jenkins/workspace/SparkPullRequestBuilder/sql/core/spark-warehouse/o..., PartitionFilters: [], PushedFilters: [GreaterThanOrEqual(TDATE,2017-08-15), IsNotNull(TDATE)], ReadSchema: struct .
[info] - SPARK-25988: self join with aliases on partitioned tables #1 (466 milliseconds)
[info] - operators.sql (1 second, 977 milliseconds)
[info] - SPARK-25988: self join with aliases on partitioned tables #2 (147 milliseconds)
[info] - predicate-functions.sql (800 milliseconds)
[info] - datetime-parsing.sql (598 milliseconds)
[info] - array.sql (637 milliseconds)
[info] - columnresolution-negative.sql (400 milliseconds)
[info] - SPARK-26366: verify ReplaceExceptWithFilter (2 seconds, 441 milliseconds)
[info] - SPARK-26402: accessing nested fields with different cases in case insensitive mode (221 milliseconds)
[info] - SPARK-27699 Validate pushed down filters (884 milliseconds)
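The InsertAdaptiveSparkPlan warning above is informational: adaptive query execution was requested, but that particular plan (a join using dynamic partition pruning, at this Spark version) is not eligible, so Spark falls back to a non-adaptive plan. A small sketch, assuming a plain local session, for checking whether a given query actually runs adaptively:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("aqe-check").getOrCreate()
    spark.conf.set("spark.sql.adaptive.enabled", "true")

    spark.range(10).createOrReplaceTempView("t")
    // Supported queries show an AdaptiveSparkPlan root in EXPLAIN output;
    // unsupported ones keep a regular plan and log the warning seen above.
    spark.sql("SELECT id FROM t WHERE id > 5").explain()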
[info] - columnresolution.sql (2 seconds, 883 milliseconds)
[info] - SPARK-26709: OptimizeMetadataOnlyQuery does not handle empty records correctly (2 seconds, 330 milliseconds)
[info] - reset command should not fail with cache (161 milliseconds)
[info] - string date comparison (1 second, 744 milliseconds)
[info] - string timestamp comparison (2 seconds, 103 milliseconds)
[info] - SPARK-28156: self-join should not miss cached view (591 milliseconds)
[info] - SPARK-29000: arithmetic computation overflow when don't allow decimal precision loss (305 milliseconds)
[info] - SPARK-29239: Subquery should not cause NPE when eliminating subexpression (338 milliseconds)
[info] - SPARK-29213: FilterExec should not throw NPE (385 milliseconds)
[info] - SPARK-29682: Conflicting attributes in Expand are resolved (701 milliseconds)
[info] - order-by-nulls-ordering.sql (7 seconds, 601 milliseconds)
[info] - SPARK-29860: Fix dataType mismatch issue for InSubquery (1 second, 338 milliseconds)
[info] - SPARK-30447: fix constant propagation inside NOT (119 milliseconds)
[info] - SPARK-26218: Fix the corner case when casting float to Integer (23 milliseconds)
[info] - SPARK-30870: Column pruning shouldn't alias a nested column for the whole structure (180 milliseconds)
[info] - SPARK-30955: Exclude Generate output when aliasing in nested column pruning (280 milliseconds)
[info] - datetime-formatting.sql (1 second, 305 milliseconds)
[info] - string-functions.sql (1 second, 568 milliseconds)
[info] - SPARK-30279 Support 32 or more grouping attributes for GROUPING_ID() (1 second, 792 milliseconds)
[info] - SPARK-31166: UNION map and other maps should not fail (135 milliseconds)
[info] - SPARK-31242: clone SparkSession should respect sessionInitWithConfigDefaults (3 milliseconds)
[info] - SPARK-31594: Do not display the seed of rand/randn with no argument in output schema (34 milliseconds)
[info] - union.sql (1 second, 11 milliseconds)
[info] - order-by-ordinal.sql (662 milliseconds)
[info] - SPARK-31670: Trim unnecessary Struct field alias in Aggregate/GroupingSets (3 seconds, 228 milliseconds)
[info] - SPARK-31761: test byte, short, integer overflow for (Divide) integral type (230 milliseconds)
[info] - normalize special floating numbers in subquery (752 milliseconds)
09:45:06.509 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: COALESCE(2)
09:45:06.514 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: REPARTITION(c1)
09:45:06.518 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: REPARTITION(c1, 2)
09:45:06.522 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: REPARTITION_BY_RANGE(c1, 2)
09:45:06.526 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: REPARTITION_BY_RANGE(c1)
09:45:06.531 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: BROADCASTJOIN(t1)
09:45:06.540 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: MAPJOIN(t1)
09:45:06.546 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: SHUFFLE_MERGE(t1)
09:45:06.553 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: MERGEJOIN(t1)
09:45:06.560 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Unrecognized hint: SHUFFLE_REPLICATE_NL(t1)
[info] - SPARK-31875: remove hints from plan when spark.sql.optimizer.disableHints = true (83 milliseconds)
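The burst of HintErrorLogger warnings above comes from the SPARK-31875 test just logged: with spark.sql.optimizer.disableHints set to true, every query hint, including normally valid ones such as BROADCASTJOIN, is stripped from the plan and reported as unrecognized. A sketch, assuming a running `spark` session (the view name t1 is ad hoc):

    spark.conf.set("spark.sql.optimizer.disableHints", "true")
    spark.range(10).createOrReplaceTempView("t1")
    // Normally a valid broadcast hint; with hints disabled it is removed from
    // the resolved plan and HintErrorLogger reports it, as in the output above.
    spark.sql("SELECT /*+ BROADCASTJOIN(t1) */ * FROM t1").explain()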
[info] - SPARK-32372: ResolveReferences.dedupRight should only rewrite attributes for ancestor plans of the conflict plan (1 second, 475 milliseconds)
[info] - SPARK-32280: Avoid duplicate rewrite attributes when there're multiple JOINs (496 milliseconds)
[info] - SPARK-32788: non-partitioned table scan should not have partition filter (317 milliseconds)
[info] - SPARK-33306: Timezone is needed when cast Date to String (494 milliseconds)
[info] - pivot.sql (6 seconds, 440 milliseconds)
[info] - SPARK-33338: GROUP BY using literal map should not fail (912 milliseconds)
:: loading settings :: url = jar:file:/home/jenkins/sparkivy/per-executor-caches/0/.cache/coursier/v1/https/maven-central.storage-download.googleapis.com/maven2/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /home/jenkins/.ivy2/cache
The jars for the packages stored in: /home/jenkins/.ivy2/jars
org.apache.hive.hcatalog#hive-hcatalog-core added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-03d4befa-6609-4352-b8bb-0442711a4ded;1.0 [not transitive]
    confs: [default]
    found org.apache.hive.hcatalog#hive-hcatalog-core;2.3.7 in central
:: resolution report :: resolve 344ms :: artifacts dl 2ms
    :: modules in use:
    org.apache.hive.hcatalog#hive-hcatalog-core;2.3.7 from central in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-03d4befa-6609-4352-b8bb-0442711a4ded
    confs: [default]
    0 artifacts copied, 1 already retrieved (0kB/5ms)
Ivy Default Cache set to: /home/jenkins/.ivy2/cache
The jars for the packages stored in: /home/jenkins/.ivy2/jars
org.scala-js#scalajs-test-interface_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-60c2f854-7e5c-4f13-8358-3038dfde3aef;1.0
    confs: [default]
    found org.scala-js#scalajs-test-interface_2.12;1.2.0 in central
    found org.scala-js#scalajs-library_2.12;1.2.0 in central
:: resolution report :: resolve 37ms :: artifacts dl 4ms
    :: modules in use:
    org.scala-js#scalajs-library_2.12;1.2.0 from central in [default]
    org.scala-js#scalajs-test-interface_2.12;1.2.0 from central in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   2   |   0   |   0   |   0   ||   2   |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-60c2f854-7e5c-4f13-8358-3038dfde3aef
    confs: [default]
    0 artifacts copied, 2 already retrieved (0kB/2ms)
Ivy Default Cache set to: /home/jenkins/.ivy2/cache
The jars for the packages stored in: /home/jenkins/.ivy2/jars
org.apache.hive#hive-contrib added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-7c74757a-0001-4137-8cc4-f6f02c24175d;1.0
    confs: [default]
    found org.apache.hive#hive-contrib;2.3.7 in central
    found org.apache.hive#hive-exec;2.3.7 in central
    found org.apache.hive#hive-vector-code-gen;2.3.7 in central
    found commons-lang#commons-lang;2.6 in central
    found com.google.guava#guava;14.0.1 in central
    found org.apache.ant#ant;1.9.1 in central
    found org.apache.ant#ant-launcher;1.9.1 in central
    found org.apache.velocity#velocity;1.5 in central
    found oro#oro;2.0.8 in central
    found org.slf4j#slf4j-api;1.7.10 in central
    found org.apache.hive#hive-llap-tez;2.3.7 in central
    found org.apache.hive#hive-common;2.3.7 in central
    found org.apache.hive#hive-shims;2.3.7 in central
    found org.apache.hive.shims#hive-shims-common;2.3.7 in central
    found org.apache.logging.log4j#log4j-slf4j-impl;2.6.2 in central
    found org.apache.thrift#libthrift;0.9.3 in central
    found org.apache.httpcomponents#httpclient;4.4 in central
    found org.apache.httpcomponents#httpcore;4.4 in central
    found commons-logging#commons-logging;1.2 in central
    found commons-codec#commons-codec;1.4 in central
    found org.apache.curator#curator-framework;2.7.1 in central
[info] - cte-legacy.sql (1 second, 226 milliseconds)
    found org.apache.curator#curator-client;2.7.1 in central
    found org.apache.zookeeper#zookeeper;3.4.6 in central
    found org.slf4j#slf4j-log4j12;1.7.6 in central
    found log4j#log4j;1.2.16 in central
    found jline#jline;2.12 in central
    found io.netty#netty;3.7.0.Final in central
    found org.apache.hive.shims#hive-shims-0.23;2.3.7 in central
    found org.apache.hadoop#hadoop-yarn-server-resourcemanager;2.7.2 in central
    found org.apache.hadoop#hadoop-annotations;2.7.2 in central
    found com.google.inject.extensions#guice-servlet;3.0 in central
    found com.google.inject#guice;3.0 in central
    found javax.inject#javax.inject;1 in central
    found aopalliance#aopalliance;1.0 in central
    found org.sonatype.sisu.inject#cglib;2.2.1-v20090111 in central
    found asm#asm;3.2 in central
    found com.google.protobuf#protobuf-java;2.5.0 in central
    found commons-io#commons-io;2.4 in central
    found com.sun.jersey#jersey-json;1.14 in central
    found org.codehaus.jettison#jettison;1.1 in central
    found com.sun.xml.bind#jaxb-impl;2.2.3-1 in central
    found javax.xml.bind#jaxb-api;2.2.2 in central
    found javax.xml.stream#stax-api;1.0-2 in central
    found javax.activation#activation;1.1 in central
    found org.codehaus.jackson#jackson-core-asl;1.9.13 in central
    found org.codehaus.jackson#jackson-mapper-asl;1.9.13 in central
    found org.codehaus.jackson#jackson-jaxrs;1.9.13 in central
    found org.codehaus.jackson#jackson-xc;1.9.13 in central
    found com.sun.jersey.contribs#jersey-guice;1.9 in central
    found org.apache.hadoop#hadoop-yarn-common;2.7.2 in central
    found org.apache.hadoop#hadoop-yarn-api;2.7.2 in central
    found org.apache.commons#commons-compress;1.9 in central
    found org.mortbay.jetty#jetty-util;6.1.26 in central
    found com.sun.jersey#jersey-core;1.14 in central
    found com.sun.jersey#jersey-client;1.9 in central
    found commons-cli#commons-cli;1.2 in central
    found com.sun.jersey#jersey-server;1.14 in central
    found org.apache.hadoop#hadoop-yarn-server-common;2.7.2 in central
    found org.fusesource.leveldbjni#leveldbjni-all;1.8 in central
    found org.apache.hadoop#hadoop-yarn-server-applicationhistoryservice;2.7.2 in central
    found commons-collections#commons-collections;3.2.2 in central
    found org.apache.hadoop#hadoop-yarn-server-web-proxy;2.7.2 in central
    found org.mortbay.jetty#jetty;6.1.26 in central
    found org.apache.hive.shims#hive-shims-scheduler;2.3.7 in central
    found org.apache.hive#hive-storage-api;2.4.0 in central
    found org.apache.commons#commons-lang3;3.1 in central
    found org.apache.orc#orc-core;1.3.4 in central
    found io.airlift#aircompressor;0.8 in central
    found io.airlift#slice;0.29 in central
    found org.openjdk.jol#jol-core;0.2 in central
    found org.eclipse.jetty.aggregate#jetty-all;7.6.0.v20120127 in central
    found org.apache.geronimo.specs#geronimo-jta_1.1_spec;1.1.1 in central
    found javax.mail#mail;1.4.1 in central
    found org.apache.geronimo.specs#geronimo-jaspic_1.0_spec;1.0 in central
    found org.apache.geronimo.specs#geronimo-annotation_1.0_spec;1.1.1 in central
    found asm#asm-commons;3.1 in central
    found asm#asm-tree;3.1 in central
    found org.eclipse.jetty.orbit#javax.servlet;3.0.0.v201112011016 in central
    found joda-time#joda-time;2.8.1 in central
    found org.apache.logging.log4j#log4j-1.2-api;2.6.2 in central
    found org.apache.logging.log4j#log4j-web;2.6.2 in central
    found com.tdunning#json;1.8 in central
    found io.dropwizard.metrics#metrics-core;3.1.0 in central
    found io.dropwizard.metrics#metrics-jvm;3.1.0 in central
    found io.dropwizard.metrics#metrics-json;3.1.0 in central
    found com.fasterxml.jackson.core#jackson-databind;2.6.5 in central
    found com.fasterxml.jackson.core#jackson-annotations;2.6.0 in central
    found com.fasterxml.jackson.core#jackson-core;2.6.5 in central
    found com.github.joshelser#dropwizard-metrics-hadoop-metrics2-reporter;0.1.2 in central
    found org.apache.hadoop#hadoop-common;2.7.2 in central
    found org.apache.commons#commons-math3;3.1.1 in central
    found xmlenc#xmlenc;0.52 in central
    found commons-httpclient#commons-httpclient;3.0.1 in central
    found junit#junit;4.11 in central
    found org.hamcrest#hamcrest-core;1.3 in central
    found commons-net#commons-net;3.1 in central
    found javax.servlet#servlet-api;2.5 in central
    found net.java.dev.jets3t#jets3t;0.9.0 in central
    found com.jamesmurty.utils#java-xmlbuilder;0.4 in central
    found commons-configuration#commons-configuration;1.6 in central
    found commons-digester#commons-digester;1.8 in central
    found commons-beanutils#commons-beanutils;1.7.0 in central
    found commons-beanutils#commons-beanutils-core;1.8.0 in central
    found org.apache.avro#avro;1.7.7 in central
    found com.thoughtworks.paranamer#paranamer;2.3 in central
    found org.xerial.snappy#snappy-java;1.0.5 in central
    found com.google.code.gson#gson;2.2.4 in central
    found org.apache.hadoop#hadoop-auth;2.7.2 in central
    found org.apache.directory.server#apacheds-kerberos-codec;2.0.0-M15 in central
    found org.apache.directory.server#apacheds-i18n;2.0.0-M15 in central
    found org.apache.directory.api#api-asn1-api;1.0.0-M20 in central
    found org.apache.directory.api#api-util;1.0.0-M20 in central
    found com.jcraft#jsch;0.1.42 in central
    found org.apache.curator#curator-recipes;2.7.1 in central
    found com.google.code.findbugs#jsr305;3.0.0 in central
    found org.apache.htrace#htrace-core;3.1.0-incubating in central
    found org.apache.hive#hive-llap-client;2.3.7 in central
    found org.apache.hive#hive-llap-common;2.3.7 in central
    found org.apache.hive#hive-serde;2.3.7 in central
    found org.apache.hive#hive-service-rpc;2.3.7 in central
    found tomcat#jasper-compiler;5.5.23 in central
    found javax.servlet#jsp-api;2.0 in central
    found ant#ant;1.6.5 in central
    found tomcat#jasper-runtime;5.5.23 in central
    found commons-el#commons-el;1.0 in central
    found org.apache.thrift#libfb303;0.9.3 in central
    found net.sf.opencsv#opencsv;2.3 in central
    found org.apache.parquet#parquet-hadoop-bundle;1.8.1 in central
    found javax.servlet.jsp#jsp-api;2.1 in central
    found org.slf4j#slf4j-log4j12;1.7.14 in central
    found org.apache.curator#apache-curator;2.7.1 in central
    found org.antlr#antlr-runtime;3.5.2 in central
    found org.antlr#ST4;4.0.4 in central
    found org.apache.ivy#ivy;2.4.0 in central
    found org.codehaus.groovy#groovy-all;2.4.4 in central
    found org.datanucleus#datanucleus-core;4.1.17 in central
    found org.apache.calcite#calcite-core;1.10.0 in central
    found org.apache.calcite.avatica#avatica;1.8.0 in central
    found org.apache.calcite.avatica#avatica-metrics;1.8.0 in central
    found org.apache.calcite#calcite-linq4j;1.10.0 in central
    found commons-dbcp#commons-dbcp;1.4 in central
    found commons-pool#commons-pool;1.5.4 in central
    found org.apache.commons#commons-lang3;3.2 in central
    found net.hydromatic#eigenbase-properties;1.1.5 in central
    found org.codehaus.janino#janino;2.7.6 in central
    found org.codehaus.janino#commons-compiler;2.7.6 in central
    found org.apache.calcite#calcite-druid;1.10.0 in central
    found stax#stax-api;1.0.1 in central
    found com.fasterxml.jackson.core#jackson-annotations;2.6.3 in central
:: resolution report :: resolve 2153ms :: artifacts dl 41ms
    :: modules in use:
    ant#ant;1.6.5 from central in [default]
    aopalliance#aopalliance;1.0 from central in [default]
    asm#asm;3.2 from central in [default]
    asm#asm-commons;3.1 from central in [default]
    asm#asm-tree;3.1 from central in [default]
    com.fasterxml.jackson.core#jackson-annotations;2.6.3 from central in [default]
    com.fasterxml.jackson.core#jackson-core;2.6.5 from central in [default]
    com.fasterxml.jackson.core#jackson-databind;2.6.5 from central in [default]
    com.github.joshelser#dropwizard-metrics-hadoop-metrics2-reporter;0.1.2 from central in [default]
    com.google.code.findbugs#jsr305;3.0.0 from central in [default]
    com.google.code.gson#gson;2.2.4 from central in [default]
    com.google.guava#guava;14.0.1 from central in [default]
    com.google.inject#guice;3.0 from central in [default]
    com.google.inject.extensions#guice-servlet;3.0 from central in [default]
    com.google.protobuf#protobuf-java;2.5.0 from central in [default]
    com.jamesmurty.utils#java-xmlbuilder;0.4 from central in [default]
    com.jcraft#jsch;0.1.42 from central in [default]
    com.sun.jersey#jersey-client;1.9 from central in [default]
    com.sun.jersey#jersey-core;1.14 from central in [default]
    com.sun.jersey#jersey-json;1.14 from central in [default]
    com.sun.jersey#jersey-server;1.14 from central in [default]
    com.sun.jersey.contribs#jersey-guice;1.9 from central in [default]
    com.sun.xml.bind#jaxb-impl;2.2.3-1 from central in [default]
    com.tdunning#json;1.8 from central in [default]
    com.thoughtworks.paranamer#paranamer;2.3 from central in [default]
    commons-beanutils#commons-beanutils;1.7.0 from central in [default]
    commons-beanutils#commons-beanutils-core;1.8.0 from central in [default]
    commons-cli#commons-cli;1.2 from central in [default]
    commons-codec#commons-codec;1.4 from central in [default]
    commons-collections#commons-collections;3.2.2 from central in [default]
    commons-configuration#commons-configuration;1.6 from central in [default]
    commons-dbcp#commons-dbcp;1.4 from central in [default]
    commons-digester#commons-digester;1.8 from central in [default]
    commons-el#commons-el;1.0 from central in [default]
    commons-httpclient#commons-httpclient;3.0.1 from central in [default]
    commons-io#commons-io;2.4 from central in [default]
    commons-lang#commons-lang;2.6 from central in [default]
    commons-logging#commons-logging;1.2 from central in [default]
    commons-net#commons-net;3.1 from central in [default]
    commons-pool#commons-pool;1.5.4 from central in [default]
    io.airlift#aircompressor;0.8 from central in [default]
    io.airlift#slice;0.29 from central in [default]
    io.dropwizard.metrics#metrics-core;3.1.0 from central in [default]
    io.dropwizard.metrics#metrics-json;3.1.0 from central in [default]
    io.dropwizard.metrics#metrics-jvm;3.1.0 from central in [default]
    io.netty#netty;3.7.0.Final from central in [default]
    javax.activation#activation;1.1 from central in [default]
    javax.inject#javax.inject;1 from central in [default]
    javax.mail#mail;1.4.1 from central in [default]
    javax.servlet#jsp-api;2.0 from central in [default]
    javax.servlet#servlet-api;2.5 from central in [default]
    javax.servlet.jsp#jsp-api;2.1 from central in [default]
    javax.xml.bind#jaxb-api;2.2.2 from central in [default]
    javax.xml.stream#stax-api;1.0-2 from central in [default]
    jline#jline;2.12 from central in [default]
    joda-time#joda-time;2.8.1 from central in [default]
    junit#junit;4.11 from central in [default]
    log4j#log4j;1.2.16 from central in [default]
    net.hydromatic#eigenbase-properties;1.1.5 from central in [default]
    net.java.dev.jets3t#jets3t;0.9.0 from central in [default]
    net.sf.opencsv#opencsv;2.3 from central in [default]
    org.antlr#ST4;4.0.4 from central in [default]
    org.antlr#antlr-runtime;3.5.2 from central in [default]
    org.apache.ant#ant;1.9.1 from central in [default]
    org.apache.ant#ant-launcher;1.9.1 from central in [default]
    org.apache.avro#avro;1.7.7 from central in [default]
    org.apache.calcite#calcite-core;1.10.0 from central in [default]
    org.apache.calcite#calcite-druid;1.10.0 from central in [default]
    org.apache.calcite#calcite-linq4j;1.10.0 from central in [default]
    org.apache.calcite.avatica#avatica;1.8.0 from central in [default]
    org.apache.calcite.avatica#avatica-metrics;1.8.0 from central in [default]
    org.apache.commons#commons-compress;1.9 from central in [default]
    org.apache.commons#commons-lang3;3.2 from central in [default]
    org.apache.commons#commons-math3;3.1.1 from central in [default]
    org.apache.curator#apache-curator;2.7.1 from central in [default]
    org.apache.curator#curator-client;2.7.1 from central in [default]
    org.apache.curator#curator-framework;2.7.1 from central in [default]
    org.apache.curator#curator-recipes;2.7.1 from central in [default]
    org.apache.directory.api#api-asn1-api;1.0.0-M20 from central in [default]
    org.apache.directory.api#api-util;1.0.0-M20 from central in [default]
    org.apache.directory.server#apacheds-i18n;2.0.0-M15 from central in [default]
    org.apache.directory.server#apacheds-kerberos-codec;2.0.0-M15 from central in [default]
    org.apache.geronimo.specs#geronimo-annotation_1.0_spec;1.1.1 from central in [default]
    org.apache.geronimo.specs#geronimo-jaspic_1.0_spec;1.0 from central in [default]
    org.apache.geronimo.specs#geronimo-jta_1.1_spec;1.1.1 from central in [default]
    org.apache.hadoop#hadoop-annotations;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-auth;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-common;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-api;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-common;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-server-applicationhistoryservice;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-server-common;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-server-resourcemanager;2.7.2 from central in [default]
    org.apache.hadoop#hadoop-yarn-server-web-proxy;2.7.2 from central in [default]
    org.apache.hive#hive-common;2.3.7 from central in [default]
    org.apache.hive#hive-contrib;2.3.7 from central in [default]
    org.apache.hive#hive-exec;2.3.7 from central in [default]
    org.apache.hive#hive-llap-client;2.3.7 from central in [default]
    org.apache.hive#hive-llap-common;2.3.7 from central in [default]
    org.apache.hive#hive-llap-tez;2.3.7 from central in [default]
    org.apache.hive#hive-serde;2.3.7 from central in [default]
    org.apache.hive#hive-service-rpc;2.3.7 from central in [default]
    org.apache.hive#hive-shims;2.3.7 from central in [default]
    org.apache.hive#hive-storage-api;2.4.0 from central in [default]
    org.apache.hive#hive-vector-code-gen;2.3.7 from central in [default]
    org.apache.hive.shims#hive-shims-0.23;2.3.7 from central in [default]
    org.apache.hive.shims#hive-shims-common;2.3.7 from central in [default]
    org.apache.hive.shims#hive-shims-scheduler;2.3.7 from central in [default]
    org.apache.htrace#htrace-core;3.1.0-incubating from central in [default]
    org.apache.httpcomponents#httpclient;4.4 from central in [default]
    org.apache.httpcomponents#httpcore;4.4 from central in [default]
    org.apache.ivy#ivy;2.4.0 from central in [default]
    org.apache.logging.log4j#log4j-1.2-api;2.6.2 from central in [default]
    org.apache.logging.log4j#log4j-slf4j-impl;2.6.2 from central in [default]
    org.apache.logging.log4j#log4j-web;2.6.2 from central in [default]
    org.apache.orc#orc-core;1.3.4 from central in [default]
    org.apache.parquet#parquet-hadoop-bundle;1.8.1 from central in [default]
    org.apache.thrift#libfb303;0.9.3 from central in [default]
    org.apache.thrift#libthrift;0.9.3 from central in [default]
    org.apache.velocity#velocity;1.5 from central in [default]
    org.apache.zookeeper#zookeeper;3.4.6 from central in [default]
    org.codehaus.groovy#groovy-all;2.4.4 from central in [default]
    org.codehaus.jackson#jackson-core-asl;1.9.13 from central in [default]
    org.codehaus.jackson#jackson-jaxrs;1.9.13 from central in [default]
    org.codehaus.jackson#jackson-mapper-asl;1.9.13 from central in [default]
    org.codehaus.jackson#jackson-xc;1.9.13 from central in [default]
    org.codehaus.janino#commons-compiler;2.7.6 from central in [default]
    org.codehaus.janino#janino;2.7.6 from central in [default]
    org.codehaus.jettison#jettison;1.1 from central in [default]
    org.datanucleus#datanucleus-core;4.1.17 from central in [default]
    org.eclipse.jetty.aggregate#jetty-all;7.6.0.v20120127 from central in [default]
    org.eclipse.jetty.orbit#javax.servlet;3.0.0.v201112011016 from central in [default]
    org.fusesource.leveldbjni#leveldbjni-all;1.8 from central in [default]
    org.hamcrest#hamcrest-core;1.3 from central in [default]
    org.mortbay.jetty#jetty;6.1.26 from central in [default]
    org.mortbay.jetty#jetty-util;6.1.26 from central in [default]
    org.openjdk.jol#jol-core;0.2 from central in [default]
    org.slf4j#slf4j-api;1.7.10 from central in [default]
    org.slf4j#slf4j-log4j12;1.7.14 from central in [default]
    org.sonatype.sisu.inject#cglib;2.2.1-v20090111 from central in [default]
    org.xerial.snappy#snappy-java;1.0.5 from central in [default]
    oro#oro;2.0.8 from central in [default]
    stax#stax-api;1.0.1 from central in [default]
    tomcat#jasper-compiler;5.5.23 from central in [default]
    tomcat#jasper-runtime;5.5.23 from central in [default]
    xmlenc#xmlenc;0.52 from central in [default]
    :: evicted modules:
    org.slf4j#slf4j-log4j12;1.7.6 by [org.slf4j#slf4j-log4j12;1.7.14] in [default]
    log4j#log4j;1.2.17 by [log4j#log4j;1.2.16] in [default]
    commons-logging#commons-logging;1.1.3 by [commons-logging#commons-logging;1.2] in [default]
    org.apache.commons#commons-lang3;3.1 by [org.apache.commons#commons-lang3;3.2] in [default]
    asm#asm;3.1 by [asm#asm;3.2] in [default]
    com.fasterxml.jackson.core#jackson-databind;2.4.2 by [com.fasterxml.jackson.core#jackson-databind;2.6.5] in [default]
    com.fasterxml.jackson.core#jackson-annotations;2.6.0 by [com.fasterxml.jackson.core#jackson-annotations;2.6.3] in [default]
    io.dropwizard.metrics#metrics-core;3.1.2 by [io.dropwizard.metrics#metrics-core;3.1.0] in [default]
    javax.servlet#servlet-api;2.4 by [javax.servlet#servlet-api;2.5] in [default]
    commons-logging#commons-logging;1.0.3 by [commons-logging#commons-logging;1.2] in [default]
    com.google.code.findbugs#jsr305;1.3.9 by [com.google.code.findbugs#jsr305;3.0.0] in [default]
    com.fasterxml.jackson.core#jackson-core;2.6.3 by [com.fasterxml.jackson.core#jackson-core;2.6.5] in [default]
    com.fasterxml.jackson.core#jackson-databind;2.6.3 by [com.fasterxml.jackson.core#jackson-databind;2.6.5] in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |  159  |   0   |   0   |   13  ||  146  |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-7c74757a-0001-4137-8cc4-f6f02c24175d
    confs: [default]
    0 artifacts copied, 146 already retrieved (0kB/31ms)
09:45:13.280 ERROR org.apache.spark.SparkContext: Failed to add /home/jenkins/.ivy2/jars/org.apache.curator_apache-curator-2.7.1.jar to Spark environment
java.io.FileNotFoundException: Jar /home/jenkins/.ivy2/jars/org.apache.curator_apache-curator-2.7.1.jar not found
    at org.apache.spark.SparkContext.addLocalJarFile$1(SparkContext.scala:1935)
    at org.apache.spark.SparkContext.addJar(SparkContext.scala:1988)
    at org.apache.spark.SparkContext.addJar(SparkContext.scala:1928)
    at org.apache.spark.sql.internal.SessionResourceLoader.$anonfun$addJar$1(SessionState.scala:181)
    at org.apache.spark.sql.internal.SessionResourceLoader.$anonfun$addJar$1$adapted(SessionState.scala:180)
    at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:75)
    at org.apache.spark.sql.internal.SessionResourceLoader.addJar(SessionState.scala:180)
    at org.apache.spark.sql.execution.command.AddJarCommand.run(resources.scala:40)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3699)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3697)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
    at org.apache.spark.sql.test.SQLTestUtilsBase.$anonfun$sql$1(SQLTestUtils.scala:231)
    at org.apache.spark.sql.SQLQuerySuite.$anonfun$new$825(SQLQuerySuite.scala:3737)
    at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
    at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
    at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    at org.scalatest.Transformer.apply(Transformer.scala:22)
    at org.scalatest.Transformer.apply(Transformer.scala:20)
    at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:190)
    at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
    at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:188)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:200)
    at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:200)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:182)
    at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
    at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
    at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
    at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:233)
    at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
    at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:233)
    at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:232)
    at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1563)
    at org.scalatest.Suite.run(Suite.scala:1112)
    at org.scalatest.Suite.run$(Suite.scala:1094)
    at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1563)
    at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:237)
    at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
    at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:237)
    at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:236)
    at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
    at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
    at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
    at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
    at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
    at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
    at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
    at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
[info] - SPARK-33084: Add jar support Ivy URI in SQL (3 seconds, 172 milliseconds)
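The test above exercises SPARK-33084, which taught ADD JAR to accept an ivy:// URI and resolve it through the Ivy machinery whose reports fill this log. A sketch, assuming a running `spark` session; the coordinates mirror the hive-hcatalog-core artifact resolved above, and transitive=false matches the "[not transitive]" resolution mode:

    // Resolves the artifact from the configured repositories and adds the
    // downloaded jar to the session, as AddJarCommand does in the trace above.
    spark.sql("ADD JAR ivy://org.apache.hive.hcatalog:hive-hcatalog-core:2.3.7?transitive=false")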
[info] - SPARK-33677: LikeSimplification should be skipped if pattern contains any escapeChar (1 second, 584 milliseconds)
[info] - subexp-elimination.sql (3 seconds, 676 milliseconds)
[info] - limit partition num to 1 when distributing by foldable expressions (61 milliseconds)
[info] - Fold RepartitionExpression num partition should check if partition expression is empty (48 milliseconds)
[info] - SPARK-34030: Fold RepartitionExpression num partition should at Optimizer (14 milliseconds)
[info] - regexp-functions.sql (850 milliseconds)
[info] - map.sql (67 milliseconds)
[info] - SPARK-33593: Vector reader got incorrect data with binary partition value (1 second, 458 milliseconds)
Ivy Default Cache set to: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-4f66923b-4893-483f-9fe1-63d7f79e3e80/cache
The jars for the packages stored in: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-4f66923b-4893-483f-9fe1-63d7f79e3e80/jars
org.apache.spark#SPARK-33084 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-5db8f376-7709-4637-8110-d387025b36e3;1.0 [not transitive]
    confs: [default]
    found org.apache.spark#SPARK-33084;1.0 in local-ivy-cache
downloading /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-4f66923b-4893-483f-9fe1-63d7f79e3e80/local/org.apache.spark/SPARK-33084/1.0/jars/SPARK-33084.jar ...
    [SUCCESSFUL ] org.apache.spark#SPARK-33084;1.0!SPARK-33084.jar (3ms)
:: resolution report :: resolve 560ms :: artifacts dl 6ms
    :: modules in use:
    org.apache.spark#SPARK-33084;1.0 from local-ivy-cache in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   1   |   1   |   0   |   0   ||   1   |   1   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-5db8f376-7709-4637-8110-d387025b36e3
    confs: [default]
    1 artifacts copied, 0 already retrieved (5kB/9ms)
[info] - SPARK-33084: Add jar support Ivy URI in SQL -- jar contains udf class (1 second, 36 milliseconds)
[info] - SPARK-33964: Combine distinct unions that have noop project between them (87 milliseconds)
[info] - SPARK-33591: null as a partition value (443 milliseconds)
09:45:18.289 WARN org.apache.spark.sql.SQLQuerySuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.SQLQuerySuite, thread names: rpc-boss-3-1, QueryStageCreator-9, subquery-3, QueryStageCreator-10, files-client-8-1, QueryStageCreator-6, QueryStageCreator-15, QueryStageCreator-3, QueryStageCreator-1, QueryStageCreator-4, BroadcastStageTimeout, QueryStageCreator-11, QueryStageCreator-8, subquery-2, Keep-Alive-Timer, QueryStageCreator-14, QueryStageCreator-7, QueryStageCreator-5, QueryStageCreator-12, subquery-1, QueryStageCreator-13, subquery-0, shuffle-boss-6-1, subquery-4, QueryStageCreator-0, QueryStageCreator-2 =====
[info] MemorySourceStressSuite:
09:45:18.443 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - higher-order-functions.sql (3 seconds, 506 milliseconds)
[info] - columnresolution-views.sql (447 milliseconds)
[info] - natural-join.sql (850 milliseconds)
[info] - memory stress test (3 seconds, 223 milliseconds)
09:45:21.634 WARN org.apache.spark.sql.streaming.MemorySourceStressSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.MemorySourceStressSuite, thread names: rpc-boss-10-1, shuffle-boss-13-1 =====
[info] SQLEventFilterBuilderSuite:
[info] - track live SQL executions (10 milliseconds)
[info] HiveResultSuite:
[info] - explain-aqe.sql (834 milliseconds)
09:45:21.782 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
[info] - show_columns.sql (180 milliseconds)
[info] - date formatting in hive result (721 milliseconds)
[info] - timestamp formatting in hive result (120 milliseconds)
[info] - toHiveString correctly handles UDTs (2 milliseconds)
[info] - decimal formatting in hive result (159 milliseconds)
[info] - SHOW TABLES in hive result (140 milliseconds)
[info] - DESCRIBE TABLE in hive result (96 milliseconds)
09:45:22.980 WARN org.apache.spark.sql.execution.HiveResultSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.HiveResultSuite, thread names: rpc-boss-16-1, shuffle-boss-19-1 =====
[info] ConnectionProviderSuite:
09:45:23.032 ERROR org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider: Failed to load built-in provider.
09:45:23.046 ERROR org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider: Failed to load built-in provider.
[info] - All built-in providers must be loaded (20 milliseconds)
09:45:23.050 ERROR org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProvider: Failed to load built-in provider.
[info] - Disabled provider must not be loaded (3 milliseconds)
[info] - Multiple security configs must be reachable (166 milliseconds)
09:45:23.251 WARN org.apache.spark.sql.execution.datasources.jdbc.connection.ConnectionProviderSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.jdbc.connection.ConnectionProviderSuite, thread names: shuffle-boss-25-1, rpc-boss-22-1 =====
[info] DSV2CharVarcharDDLTestSuite:
[info] - describe-part-after-analyze.sql (1 second, 352 milliseconds)
[info] - allow to change column for char(x) to char(y), x == y (63 milliseconds)
[info] - not allow to change column for char(x) to char(y), x != y (24 milliseconds)
[info] - not allow to change column from string to char type (21 milliseconds)
[info] - not allow to change column from int to char type (20 milliseconds)
[info] - allow to change column for varchar(x) to varchar(y), x == y (27 milliseconds)
[info] - not allow to change column for varchar(x) to varchar(y), x > y (20 milliseconds)
[info] - SPARK-33901: alter table add columns should not change original table's schema (157 milliseconds)
[info] - SPARK-33901: ctas should not change table's schema (159 milliseconds)
[info] - allow to change column from char to string type (24 milliseconds)
[info] - allow to change column from char(x) to varchar(y) type x <= y (44 milliseconds)
[info] - allow to change column from varchar(x) to varchar(y) type x <= y (29 milliseconds)
[info] - not allow to change column from char(x) to varchar(y) type x > y (15 milliseconds)
09:45:23.956 WARN org.apache.spark.sql.execution.command.DSV2CharVarcharDDLTestSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.command.DSV2CharVarcharDDLTestSuite, thread names: rpc-boss-28-1, shuffle-boss-31-1 =====
[info] SparkPlanSuite:
[info] - SPARK-21619 execution of a canonicalized plan should fail (18 milliseconds)
[info] - SPARK-23731 plans should be canonicalizable after being (de)serialized (240 milliseconds)
[info] - SPARK-27418 BatchScanExec should be canonicalizable after being (de)serialized (242 milliseconds)
[info] - SPARK-25357 SparkPlanInfo of FileScan contains nonEmpty metadata (243 milliseconds)
[info] - SPARK-30780 empty LocalTableScan should use RDD without partitions (1 millisecond)
[info] - bitwise.sql (1 second, 480 milliseconds)
[info] - SPARK-33617: change default parallelism of LocalTableScan (50 milliseconds)
09:45:24.833 WARN org.apache.spark.sql.execution.SparkPlanSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.SparkPlanSuite, thread names: shuffle-boss-37-1, rpc-boss-34-1 =====
[info] StreamingAggregationSuite:
09:45:24.972 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:26.421 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:26.645 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:26.646 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
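The DSV2CharVarcharDDLTestSuite results above read as a compact spec for char/varchar column changes: length-preserving and length-widening conversions pass, everything else is rejected at analysis. A sketch of those rules, assuming a session whose catalog supports v2 ALTER COLUMN DDL (table name and USING clause are illustrative):

    // Sketch of the char/varchar ALTER COLUMN rules asserted by the tests above.
    spark.sql("CREATE TABLE t (c CHAR(5), v VARCHAR(5), s STRING) USING parquet")
    spark.sql("ALTER TABLE t ALTER COLUMN c TYPE CHAR(5)")    // ok: char(x) -> char(y) only when x == y
    spark.sql("ALTER TABLE t ALTER COLUMN v TYPE VARCHAR(6)") // ok: varchar(x) -> varchar(y), x <= y
    spark.sql("ALTER TABLE t ALTER COLUMN c TYPE VARCHAR(6)") // ok: char(x) -> varchar(y), x <= y
    // char -> string is also allowed per the test list; each of these, by
    // contrast, fails analysis (narrowing or an unrelated source type):
    //   ALTER TABLE t ALTER COLUMN v TYPE VARCHAR(4)   -- varchar(x) -> varchar(y), x > y
    //   ALTER TABLE t ALTER COLUMN s TYPE CHAR(5)      -- string -> char
    //   ALTER TABLE t ALTER COLUMN c TYPE CHAR(6)      -- char(x) -> char(y), x != y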
09:45:26.768 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:26.789 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:26.874 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - simple count, update mode - state format version 1 (2 seconds, 619 milliseconds)
09:45:27.547 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - except.sql (3 seconds, 33 milliseconds)
09:45:28.829 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:29.093 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:29.094 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:29.209 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:29.216 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:29.313 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - simple count, update mode - state format version 2 (2 seconds, 496 milliseconds)
09:45:30.046 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - having.sql (2 seconds, 329 milliseconds)
[info] - tablesample-negative.sql (174 milliseconds)
[info] - current_database_catalog.sql (35 milliseconds)
[info] - count distinct - state format version 1 (696 milliseconds)
09:45:30.739 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - count distinct - state format version 2 (646 milliseconds)
09:45:31.393 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - cast.sql (1 second, 468 milliseconds)
09:45:32.677 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:32.912 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:32.913 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:33.030 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:33.047 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:33.137 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - simple count, complete mode - state format version 1 (2 seconds, 614 milliseconds)
09:45:34.011 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:35.293 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:35.508 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:35.509 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:35.621 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:35.637 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:35.722 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - simple count, complete mode - state format version 2 (2 seconds, 549 milliseconds)
09:45:36.539 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
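For orientation, the "simple count" tests above boil down to a grouped count over an in-memory test source, run once per state format version. A minimal sketch under that reading (MemoryStream is a test-only source in org.apache.spark.sql.execution.streaming, not a public API; the suite's actual code differs):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.execution.streaming.MemoryStream
    import org.apache.spark.sql.streaming.OutputMode

    val spark = SparkSession.builder().master("local[2]").getOrCreate()
    import spark.implicits._
    implicit val sqlContext = spark.sqlContext

    val input = MemoryStream[Int]
    val counts = input.toDF().groupBy("value").count()
    val query = counts.writeStream
      .outputMode(OutputMode.Update)   // update mode: emit changed counts each batch
      .format("memory")
      .queryName("counts")
      .start()
    input.addData(3, 2, 1, 3)
    query.processAllAvailable()
    spark.table("counts").show()       // value 3 counted twice, the others once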
[info] - simple count, append mode - state format version 1 (19 milliseconds)
09:45:36.552 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - simple count, append mode - state format version 2 (11 milliseconds)
09:45:36.591 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:38.724 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:38.998 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:38.999 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:39.116 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:39.135 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:39.220 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - sort after aggregate in complete mode - state format version 1 (4 seconds, 388 milliseconds)
09:45:40.986 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:43.070 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:43.325 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:43.327 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:43.426 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:45:43.433 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
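The append-mode variants above finish in milliseconds because there is nothing to run: a streaming aggregation without a watermark cannot produce append-mode output, and the suite evidently asserts that error path rather than executing batches. Continuing the sketch from the previous block:

    // Append mode needs a watermark so aggregates can be finalized and emitted once.
    // Without one, start() fails analysis (error message paraphrased):
    //   "Append output mode not supported when there are streaming aggregations
    //    ... without watermark"
    val appendAttempt = input.toDF()
      .groupBy("value").count()
      .writeStream
      .outputMode(OutputMode.Append)
      .format("memory")
      .queryName("append_counts")
    // appendAttempt.start()   // throws org.apache.spark.sql.AnalysisException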
09:45:43.524 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - sort after aggregate in complete mode - state format version 2 (4 seconds, 166 milliseconds)
09:45:45.191 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - group-by.sql (15 seconds, 741 milliseconds)
[info] - state metrics - append mode - state format version 1 (3 seconds, 545 milliseconds)
09:45:48.709 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - state metrics - append mode - state format version 2 (3 seconds, 161 milliseconds)
09:45:51.853 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:53.436 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - state metrics - update/complete mode - state format version 1 (3 seconds, 124 milliseconds)
09:45:54.975 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:45:56.486 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - state metrics - update/complete mode - state format version 2 (2 seconds, 991 milliseconds)
09:45:57.984 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - multiple keys - state format version 1 (1 second, 470 milliseconds)
09:45:59.433 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - multiple keys - state format version 2 (1 second, 424 milliseconds)
09:46:00.854 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:01.720 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:02.002 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:02.023 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:02.112 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
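Every aggregation test in this suite runs twice because the harness pins the state format flag to 1 and then 2; the conf name is the one the OffsetSeqMetadata warnings further down report as defaulted. A sketch of pinning it explicitly (this affects newly started queries only; on restart, the value recorded in the checkpoint's offset log takes precedence):

    // Pin the streaming-aggregation state layout for newly started queries.
    spark.conf.set("spark.sql.streaming.aggregation.stateFormatVersion", "2")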
09:46:02.133 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:02.215 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - SPARK-29438: ensure UNION doesn't lead streaming aggregation to use shifted partition IDs - state format version 1 (1 second, 550 milliseconds)
09:46:02.402 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:03.254 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:03.542 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:03.563 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:03.654 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:03.663 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:03.744 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - SPARK-29438: ensure UNION doesn't lead streaming aggregation to use shifted partition IDs - state format version 2 (1 second, 538 milliseconds)
[info] - midbatch failure - state format version 1 (1 second, 175 milliseconds)
[info] - midbatch failure - state format version 2 (1 second, 69 milliseconds)
09:46:06.223 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:08.435 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:08.618 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:08.620 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:08.736 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:08.754 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:08.834 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - prune results by current_time, complete mode - state format version 1 (3 seconds, 780 milliseconds)
09:46:09.975 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:11.959 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:12.153 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:12.153 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:12.263 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:12.285 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:12.378 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - prune results by current_time, complete mode - state format version 2 (3 seconds, 240 milliseconds)
09:46:13.247 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:15.819 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:16.164 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:16.166 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
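The HDFSBackedStateStoreProvider warning that dominates this stretch is benign, as its own text says: the provider keeps only the most recent state versions in memory (loadedMaps) and rebuilds older ones from the snapshot and delta files in the checkpoint. If the rebuilds mattered in a real workload, the in-memory retention could be raised; the conf below is the knob for that to the best of my knowledge (default 2), so treat this sketch as an assumption to verify against the SQLConf of the Spark version in use:

    // Keep more recent state versions cached per provider, trading memory
    // for fewer snapshot/delta replays (default is 2).
    spark.conf.set("spark.sql.streaming.maxBatchesToRetainInMemory", "4")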
09:46:16.290 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:16.295 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:16.381 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - prune results by current_date, complete mode - state format version 1 (4 seconds, 127 milliseconds)
09:46:17.367 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:19.849 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:20.112 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:20.114 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:20.239 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:20.245 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:20.333 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - prune results by current_date, complete mode - state format version 2 (3 seconds, 930 milliseconds)
09:46:21.294 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-19690: do not convert batch aggregation in streaming query to streaming - state format version 1 (992 milliseconds)
09:46:22.293 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-19690: do not convert batch aggregation in streaming query to streaming - state format version 2 (929 milliseconds)
09:46:23.202 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-21977: coalesce(1) with 0 partition RDD should be repartitioned to 1 - state format version 1 (1 second, 506 milliseconds)
09:46:24.697 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-21977: coalesce(1) with 0 partition RDD should be repartitioned to 1 - state format version 2 (1 second, 429 milliseconds)
09:46:26.137 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:27.030 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:27.332 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:27.333 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:27.461 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:27.493 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:27.600 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - SPARK-21977: coalesce(1) with aggregation should still be repartitioned when it has non-empty grouping keys - state format version 1 (2 seconds, 291 milliseconds)
09:46:28.420 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:29.221 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:29.493 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:29.494 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:29.626 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:29.646 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:29.748 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - SPARK-21977: coalesce(1) with aggregation should still be repartitioned when it has non-empty grouping keys - state format version 2 (2 seconds, 195 milliseconds)
09:46:30.612 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-22230: last should change with new batches - state format version 1 (1 second, 317 milliseconds)
09:46:31.931 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-22230: last should change with new batches - state format version 2 (1 second, 324 milliseconds)
09:46:33.251 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-23004: Ensure that TypedImperativeAggregate functions do not throw errors - state format version 1 (687 milliseconds)
09:46:33.936 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - SPARK-23004: Ensure that TypedImperativeAggregate functions do not throw errors - state format version 2 (608 milliseconds)
09:46:34.563 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:46:34.574 WARN org.apache.spark.sql.execution.streaming.OffsetSeqMetadata: Conf 'spark.sql.streaming.multipleWatermarkPolicy' was not found in the offset log, using default value 'min'
09:46:34.574 WARN org.apache.spark.sql.execution.streaming.OffsetSeqMetadata: Conf 'spark.sql.streaming.flatMapGroupsWithState.stateFormatVersion' was not found in the offset log, using default value '1'
09:46:34.574 WARN org.apache.spark.sql.execution.streaming.OffsetSeqMetadata: Conf 'spark.sql.streaming.aggregation.stateFormatVersion' was not found in the offset log, using default value '1'
09:46:34.574 WARN org.apache.spark.sql.execution.streaming.OffsetSeqMetadata: Conf 'spark.sql.streaming.join.stateFormatVersion' was not found in the offset log, using default value '1'
09:46:34.574 WARN org.apache.spark.sql.execution.streaming.OffsetSeqMetadata: Conf 'spark.sql.streaming.stateStore.compression.codec' was not found in the offset log, using default value 'lz4'
09:46:34.826 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:34.827 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:34.945 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:35.030 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:35.055 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - simple count, update mode - recovery from checkpoint uses state format version 1 (1 second, 249 milliseconds)
[info] - changing schema of state when restarting query - state format version 1 (1 second, 180 milliseconds)
[info] - changing schema of state when restarting query - state format version 2 (1 second, 18 milliseconds)
09:46:39.027 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
ExitCodeException exitCode=1: chmod: cannot access '/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-7e86d51c-b725-44d0-87a7-2cb5690800a1/state/0/0/..2.delta.ece09406-aa77-4eb6-a860-455cd950202a.TID638.tmp.crc': No such file or directory
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
    at org.apache.hadoop.util.Shell.run(Shell.java:901)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:867)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:254)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:234)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:333)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:322)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:353)
    at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1235)
    at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100)
    at org.apache.hadoop.fs.ChecksumFs$ChecksumFSOutputSummer.<init>(ChecksumFs.java:360)
    at org.apache.hadoop.fs.ChecksumFs.createInternal(ChecksumFs.java:400)
    at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:607)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:698)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:694)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.create(FileContext.java:700)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:316)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:322)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:115)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:115)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:116)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:116)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:169)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps.$anonfun$mapPartitionsWithStateStore$2(package.scala:66)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps.$anonfun$mapPartitionsWithStateStore$2$adapted(package.scala:65)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:125)
    at org.apache.spark.TaskContextImpl.$anonfun$markTaskCompleted$1(TaskContextImpl.scala:124)
    at org.apache.spark.TaskContextImpl.$anonfun$markTaskCompleted$1$adapted(TaskContextImpl.scala:124)
    at org.apache.spark.TaskContextImpl.$anonfun$invokeListeners$1(TaskContextImpl.scala:137)
    at org.apache.spark.TaskContextImpl.$anonfun$invokeListeners$1$adapted(TaskContextImpl.scala:135)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:135)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:141)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
[info] - changing schema of state when restarting query - schema check off - state format version 1 (1 second, 49 milliseconds)
[info] - changing schema of state when restarting query - schema check off - state format version 2 (1 second, 18 milliseconds)
09:46:40.050 ERROR org.apache.spark.TaskContextImpl: Error in TaskCompletionListener
ExitCodeException exitCode=1: chmod: cannot access '/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-4da49a82-396b-461b-b0c6-eeb2e40320f1/state/0/0/..2.delta.bb95ded6-faba-4f9a-81ba-6854cc2403c3.TID648.tmp.crc': No such file or directory
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
    at org.apache.hadoop.util.Shell.run(Shell.java:901)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:867)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:254)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:234)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:333)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:322)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:353)
    at org.apache.hadoop.fs.FileSystem.primitiveCreate(FileSystem.java:1235)
    at org.apache.hadoop.fs.DelegateToFileSystem.createInternal(DelegateToFileSystem.java:100)
    at org.apache.hadoop.fs.ChecksumFs$ChecksumFSOutputSummer.<init>(ChecksumFs.java:360)
    at org.apache.hadoop.fs.ChecksumFs.createInternal(ChecksumFs.java:400)
    at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:607)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:698)
    at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:694)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.create(FileContext.java:700)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createTempFile(CheckpointFileManager.scala:316)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:133)
    at org.apache.spark.sql.execution.streaming.CheckpointFileManager$RenameBasedFSDataOutputStream.<init>(CheckpointFileManager.scala:136)
    at org.apache.spark.sql.execution.streaming.FileContextBasedCheckpointFileManager.createAtomic(CheckpointFileManager.scala:322)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream$lzycompute(HDFSBackedStateStoreProvider.scala:115)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.deltaFileStream(HDFSBackedStateStoreProvider.scala:115)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream$lzycompute(HDFSBackedStateStoreProvider.scala:116)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.compressedStream(HDFSBackedStateStoreProvider.scala:116)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:169)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps.$anonfun$mapPartitionsWithStateStore$2(package.scala:66)
    at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps.$anonfun$mapPartitionsWithStateStore$2$adapted(package.scala:65)
    at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:125)
    at org.apache.spark.TaskContextImpl.$anonfun$markTaskCompleted$1(TaskContextImpl.scala:124)
    at org.apache.spark.TaskContextImpl.$anonfun$markTaskCompleted$1$adapted(TaskContextImpl.scala:124)
    at org.apache.spark.TaskContextImpl.$anonfun$invokeListeners$1(TaskContextImpl.scala:137)
    at org.apache.spark.TaskContextImpl.$anonfun$invokeListeners$1$adapted(TaskContextImpl.scala:135)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:135)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:141)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:46:40.051 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 237.0 (TID 648) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:46:40.091 WARN org.apache.spark.sql.streaming.StreamingAggregationSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.StreamingAggregationSuite, thread names: state-store-maintenance-task, shuffle-boss-43-1, rpc-boss-40-1 =====
[info] UIUtilsSuite:
[info] - streaming query started with no batch completed (658 milliseconds)
[info] - streaming query started with at least one batch completed (1 millisecond)
[info] DateFunctionsSuite:
[info] - function current_date (111 milliseconds)
[info] - function current_timestamp and now (387 milliseconds)
[info] - timestamp comparison with date strings (236 milliseconds)
[info] - date comparison with date strings (240 milliseconds)
[info] - date format (434 milliseconds)
[info] - year (203 milliseconds)
[info] - quarter (190 milliseconds)
[info] - month (191 milliseconds)
[info] - dayofmonth (186 milliseconds)
[info] - dayofyear (173 milliseconds)
[info] - hour (209 milliseconds)
[info] - minute (164 milliseconds)
[info] - second (155 milliseconds)
[info] - weekofyear (161 milliseconds)
[info] - function date_add (601 milliseconds)
[info] - function date_sub (757 milliseconds)
[info] - time_add (225 milliseconds)
[info] - time_sub (167 milliseconds)
[info] - function add_months (266 milliseconds)
[info] - function months_between (644 milliseconds)
[info] - function last_day (202 milliseconds)
[info] - function next_day (255 milliseconds)
09:46:47.855 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 151.0 (TID 242)
org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '2015-07-22 10:00:00' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150) at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141) at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86) at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755) at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.time.format.DateTimeParseException: Text '2015-07-22 10:00:00' could not be parsed, unparsed text found at index 10 at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1952) at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1777) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78) ... 20 more 09:46:47.859 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 151.0 (TID 242) (192.168.10.31 executor driver): org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '2015-07-22 10:00:00' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string. 
at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150) at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141) at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86) at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755) at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898) at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.time.format.DateTimeParseException: Text '2015-07-22 10:00:00' could not be parsed, unparsed text found at index 10 at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1952) at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1777) at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78) ... 
20 more
09:46:47.860 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 151.0 failed 1 times; aborting job
[info] - function to_date (1 second, 107 milliseconds)
[info] - function trunc (191 milliseconds)
[info] - function date_trunc (805 milliseconds)
[info] - unsupported fmt fields for trunc/date_trunc results null (541 milliseconds)
[info] - from_unixtime (947 milliseconds)
[info] - unix_timestamp (2 seconds, 591 milliseconds)
[info] - to_unix_timestamp (1 second, 426 milliseconds)
[info] - to_timestamp (1 second, 125 milliseconds)
[info] - datediff (368 milliseconds)
[info] - to_timestamp with microseconds precision (106 milliseconds)
[info] - from_utc_timestamp with literal zone (213 milliseconds)
[info] - from_utc_timestamp with column zone (208 milliseconds)
[info] - handling null field by date_part (154 milliseconds)
[info] - to_utc_timestamp with literal zone (190 milliseconds)
[info] - to_utc_timestamp with column zone (190 milliseconds)
09:46:57.394 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 360.0 (TID 632)
org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '2020-01-27T20:06:11.847-0800' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
    at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150)
    at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86)
    at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.time.format.DateTimeParseException: Text '2020-01-27T20:06:11.847-0800' could not be parsed at index 23
    at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1949)
    at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1777)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78)
    ... 20 more
09:46:57.397 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 360.0 (TID 632) (192.168.10.31 executor driver): org.apache.spark.SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '2020-01-27T20:06:11.847-0800' in the new parser. You can set spark.sql.legacy.timeParserPolicy to LEGACY to restore the behavior before Spark 3.0, or set to CORRECTED and treat it as an invalid datetime string.
    at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:150)
    at org.apache.spark.sql.catalyst.util.DateTimeFormatterHelper$$anonfun$checkParsedDiff$1.applyOrElse(DateTimeFormatterHelper.scala:141)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:86)
    at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:77)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.time.format.DateTimeParseException: Text '2020-01-27T20:06:11.847-0800' could not be parsed at index 23
    at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1949)
    at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1777)
    at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.$anonfun$parse$1(TimestampFormatter.scala:78)
    ... 20 more
09:46:57.397 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 360.0 failed 1 times; aborting job
[info] - SPARK-30668: use legacy timestamp parser in to_timestamp (190 milliseconds)
[info] - SPARK-30752: convert time zones on a daylight saving day (138 milliseconds)
[info] - SPARK-30766: date_trunc of old timestamps to hours and days (197 milliseconds)
[info] - SPARK-30793: truncate timestamps before the epoch to seconds and minutes (192 milliseconds)
09:46:57.970 WARN org.apache.spark.sql.DateFunctionsSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DateFunctionsSuite, thread names: rpc-boss-46-1, shuffle-boss-49-1, QueryStageCreator-18, QueryStageCreator-17, QueryStageCreator-16 =====
[info] NestedDataSourceV1Suite:
09:46:58.227 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `camelcase`
09:46:58.402 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `camelcase`
09:46:58.584 WARN org.apache.spark.sql.execution.datasources.DataSource: Found duplicate column(s) in the data schema and the partition schema: `camelcase`
[info] - SPARK-32431: consistent error for nested and top-level duplicate columns (1 second, 99 milliseconds)
09:46:59.156 WARN org.apache.spark.sql.NestedDataSourceV1Suite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.NestedDataSourceV1Suite, thread names: shuffle-boss-55-1, rpc-boss-52-1 =====
[info] StateStoreSuite:
09:46:59.300 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:59.302 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:59.312 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:59.321 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:59.392 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:46:59.401 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - get, put, remove, commit, and all data iterator - with codec lz4 (227 milliseconds)
09:46:59.519 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
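The SparkUpgradeException above is the failure path being exercised: Spark 3.0's java.time-based parser rejects the colon-less zone offset "-0800", and index 23 is exactly where the offset begins in '2020-01-27T20:06:11.847-0800'. A minimal Scala sketch of that behavior and of the two remediations the message itself names, assuming a SparkSession named `spark` (the pattern shown is illustrative; the log does not record which pattern the test used):

    import java.time.format.DateTimeFormatter
    // Strict ISO parsing expects a colon in the offset ("-08:00"), so this throws
    // a DateTimeParseException at the offset, like the Caused by above:
    // DateTimeFormatter.ISO_OFFSET_DATE_TIME.parse("2020-01-27T20:06:11.847-0800")
    // A pattern using XX accepts the colon-less "+HHmm" form:
    DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXX")
      .parse("2020-01-27T20:06:11.847-0800")

    // Remediations named by the exception message itself:
    spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")      // pre-3.0 parsing
    // spark.conf.set("spark.sql.legacy.timeParserPolicy", "CORRECTED") // unparseable -> null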
09:46:59.526 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.540 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.549 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.616 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.626 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec lzf (223 milliseconds) 09:46:59.737 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.739 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.748 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.757 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.826 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.837 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec snappy (211 milliseconds) 09:46:59.982 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.983 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. 
Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:46:59.992 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.002 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.070 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.086 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec zstd (247 milliseconds) 09:47:00.203 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.204 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.214 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.223 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.288 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.297 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec org.apache.spark.io.LZ4CompressionCodec (211 milliseconds) 09:47:00.404 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.405 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
09:47:00.414 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.424 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.487 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.498 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec org.apache.spark.io.LZFCompressionCodec (200 milliseconds) 09:47:00.615 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.616 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.625 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.635 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.701 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.711 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec org.apache.spark.io.SnappyCompressionCodec (214 milliseconds) 09:47:00.833 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.834 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.844 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. 
Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.853 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.921 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:00.931 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - get, put, remove, commit, and all data iterator - with codec org.apache.spark.io.ZStdCompressionCodec (219 milliseconds) 09:47:01.029 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec lz4 (149 milliseconds) 09:47:01.183 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec lzf (152 milliseconds) 09:47:01.328 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec snappy (145 milliseconds) 09:47:01.474 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec zstd (141 milliseconds) 09:47:01.617 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec org.apache.spark.io.LZ4CompressionCodec (150 milliseconds) 09:47:01.771 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec org.apache.spark.io.LZFCompressionCodec (148 milliseconds) 09:47:01.921 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec org.apache.spark.io.SnappyCompressionCodec (145 milliseconds) 09:47:02.067 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. 
Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - numKeys metrics - with codec org.apache.spark.io.ZStdCompressionCodec (147 milliseconds) [info] - removing while iterating - with codec lz4 (49 milliseconds) [info] - removing while iterating - with codec lzf (44 milliseconds) [info] - removing while iterating - with codec snappy (43 milliseconds) [info] - removing while iterating - with codec zstd (50 milliseconds) [info] - removing while iterating - with codec org.apache.spark.io.LZ4CompressionCodec (48 milliseconds) [info] - removing while iterating - with codec org.apache.spark.io.LZFCompressionCodec (51 milliseconds) [info] - removing while iterating - with codec org.apache.spark.io.SnappyCompressionCodec (38 milliseconds) [info] - removing while iterating - with codec org.apache.spark.io.ZStdCompressionCodec (48 milliseconds) [info] - abort - with codec lz4 (81 milliseconds) [info] - abort - with codec lzf (84 milliseconds) [info] - abort - with codec snappy (88 milliseconds) [info] - abort - with codec zstd (86 milliseconds) [info] - abort - with codec org.apache.spark.io.LZ4CompressionCodec (83 milliseconds) [info] - abort - with codec org.apache.spark.io.LZFCompressionCodec (86 milliseconds) [info] - abort - with codec org.apache.spark.io.SnappyCompressionCodec (87 milliseconds) [info] - abort - with codec org.apache.spark.io.ZStdCompressionCodec (86 milliseconds) 09:47:03.204 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.254 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.308 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec lz4 (132 milliseconds) 09:47:03.335 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.390 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.438 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec lzf (130 milliseconds) 09:47:03.463 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
09:47:03.510 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.559 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec snappy (120 milliseconds) 09:47:03.582 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.631 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.680 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec zstd (121 milliseconds) 09:47:03.707 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.758 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.807 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec org.apache.spark.io.LZ4CompressionCodec (125 milliseconds) 09:47:03.833 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.885 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:03.932 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec org.apache.spark.io.LZFCompressionCodec (125 milliseconds) 09:47:03.958 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
09:47:04.009 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.059 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec org.apache.spark.io.SnappyCompressionCodec (126 milliseconds) 09:47:04.082 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.133 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.183 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - getStore with invalid versions - with codec org.apache.spark.io.ZStdCompressionCodec (123 milliseconds) 09:47:04.273 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.335 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec lz4 (152 milliseconds) 09:47:04.420 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.488 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec lzf (153 milliseconds) 09:47:04.578 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.643 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
[info] - two concurrent StateStores - one for read-only and one for read-write - with codec snappy (153 milliseconds) 09:47:04.730 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.796 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec zstd (152 milliseconds) 09:47:04.883 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:04.953 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec org.apache.spark.io.LZ4CompressionCodec (156 milliseconds) 09:47:05.036 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.096 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec org.apache.spark.io.LZFCompressionCodec (142 milliseconds) 09:47:05.181 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.244 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - two concurrent StateStores - one for read-only and one for read-write - with codec org.apache.spark.io.SnappyCompressionCodec (147 milliseconds) 09:47:05.325 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.388 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
[info] - two concurrent StateStores - one for read-only and one for read-write - with codec org.apache.spark.io.ZStdCompressionCodec (144 milliseconds) 09:47:05.470 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.533 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.595 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 3 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - retaining only two latest versions when MAX_BATCHES_TO_RETAIN_IN_MEMORY set to 2 (206 milliseconds) 09:47:05.679 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.742 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.743 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.811 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - failure after committing with MAX_BATCHES_TO_RETAIN_IN_MEMORY set to 1 (215 milliseconds) 09:47:05.902 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.902 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:05.970 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - no cache data with MAX_BATCHES_TO_RETAIN_IN_MEMORY set to 0 (158 milliseconds) 09:47:06.106 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:06.117 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. 
Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:06.346 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 6 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:06.420 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 6 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:06.430 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 6 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:07.148 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 20 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:07.228 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 20 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - snapshotting (1 second, 257 milliseconds) 09:47:08.462 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 20 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:08.472 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 19 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. [info] - cleaning (1 second, 244 milliseconds) [info] - SPARK-19677: Committing a delta file atop an existing one should not fail on HDFS (56 milliseconds) 09:47:08.973 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 6 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:08.983 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 6 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:08.992 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 5 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:09.001 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 5 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 09:47:09.011 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 5 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query. 
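The parameterized StateStoreSuite runs above cycle through both short codec aliases (lz4, lzf, snappy, zstd) and fully-qualified codec class names, and the MAX_BATCHES_TO_RETAIN_IN_MEMORY tests cover the in-memory version retention limit. A hedged sketch of how these knobs look from user code, assuming the SQLConf keys spark.sql.streaming.stateStore.compression.codec and spark.sql.streaming.maxBatchesToRetainInMemory (the key names are assumptions based on SQLConf naming, not taken from this log):

    // Either form is expected to resolve to the same codec, by alias or class name
    // (assumed conf key, matching the two naming styles the tests iterate over):
    spark.conf.set("spark.sql.streaming.stateStore.compression.codec", "zstd")
    spark.conf.set("spark.sql.streaming.stateStore.compression.codec",
      "org.apache.spark.io.ZStdCompressionCodec")
    // Assumed conf key behind MAX_BATCHES_TO_RETAIN_IN_MEMORY: caps how many
    // recent state versions each provider keeps in memory:
    spark.conf.set("spark.sql.streaming.maxBatchesToRetainInMemory", "2")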
[info] - corrupted file handling (481 milliseconds)
[info] - reports memory usage (68 milliseconds)
[info] - reports memory usage on current version (71 milliseconds)
[info] - StateStore.get (180 milliseconds)
[info] - maintenance (2 seconds, 261 milliseconds)
[info] - SPARK-18342: commit fails when rename fails (40 milliseconds)
[info] - SPARK-18416: do not create temp delta file until the store is updated (239 milliseconds)
09:47:11.941 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:47:12.663 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:47:12.839 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - SPARK-21145: Restarted queries create new provider instances (1 second, 216 milliseconds)
[info] - error writing [version].delta cancels the output stream (173 milliseconds)
09:47:13.430 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 1 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
09:47:13.431 WARN org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider: The state for version 2 doesn't exist in loadedMaps. Reading snapshot file and delta files if needed...Note that this is normal for the first batch of starting query.
[info] - expose metrics with custom metrics to StateStoreMetrics (164 milliseconds)
09:47:13.434 WARN org.apache.spark.sql.execution.streaming.state.StateStoreSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.streaming.state.StateStoreSuite, thread names: shuffle-boss-61-1, rpc-boss-58-1, shuffle-boss-67-1, rpc-boss-64-1 =====
09:47:13.441 WARN org.apache.spark.sql.SparkSession: An existing Spark session exists as the active or default session. This probably means another suite leaked it. Attempting to stop it before continuing.
This existing Spark session was created at:
    org.apache.spark.sql.execution.streaming.state.StateStoreSuite.$anonfun$new$54(StateStoreSuite.scala:582)
    org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
    org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
    org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    org.scalatest.Transformer.apply(Transformer.scala:22)
    org.scalatest.Transformer.apply(Transformer.scala:20)
    org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:190)
    org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
    org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:188)
    org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:200)
    org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:200)
    org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:182)
    org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
    org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
    org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
    org.apache.spark.sql.execution.streaming.state.StateStoreSuite.org$scalatest$BeforeAndAfter$$super$runTest(StateStoreSuite.scala:49)
    org.scalatest.BeforeAndAfter.runTest(BeforeAndAfter.scala:213)
    org.scalatest.BeforeAndAfter.runTest$(BeforeAndAfter.scala:203)
    org.apache.spark.sql.execution.streaming.state.StateStoreSuite.runTest(StateStoreSuite.scala:49)
[info] ForeachBatchSinkSuite:
09:47:13.501 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-94d1e263-8866-470c-bd51-39029c506372. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
09:47:13.501 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - foreachBatch with non-stateful query (610 milliseconds)
09:47:14.114 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-7f21d90f-a2f3-429b-845a-4765c779a0d0. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
09:47:14.114 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - foreachBatch with stateful query in update mode (2 seconds, 25 milliseconds)
09:47:16.126 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-9472a8d7-3de0-44af-a90d-e8d3f2943ffb. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
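The repeated "Temporary checkpoint location created" warnings come from test queries started without an explicit checkpoint directory. A minimal foreachBatch sketch showing both ways around the warning; `df`, the lambda body, and the paths are illustrative, not from the log:

    // Option 1, as the warning suggests: best-effort deletion of temp checkpoints.
    spark.conf.set("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")

    // Option 2: give the query a real checkpoint location, so no temp dir is created.
    val query = df.writeStream
      .option("checkpointLocation", "/tmp/checkpoints/my-query")
      .foreachBatch { (batchDf: org.apache.spark.sql.DataFrame, batchId: Long) =>
        // runs once per micro-batch, with the batch exposed as a regular DataFrame
        batchDf.write.mode("append").parquet(s"/tmp/out/batch=$batchId")
      }
      .start()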
09:47:16.126 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - group-by-filter.sql (1 minute, 29 seconds)
[info] - foreachBatch with stateful query in complete mode (1 second, 887 milliseconds)
09:47:18.010 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-37ecacc0-2ba8-4edb-9a0f-35297d0c038c. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
09:47:18.010 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
[info] - foreachBatchSink does not affect metric generation (309 milliseconds)
[info] - throws errors in invalid situations (8 milliseconds)
09:47:18.355 WARN org.apache.spark.sql.execution.streaming.sources.ForeachBatchSinkSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.streaming.sources.ForeachBatchSinkSuite, thread names: state-store-maintenance-task, shuffle-boss-73-1, rpc-boss-70-1 =====
[info] JDBCV2Suite:
[info] - simple scan (466 milliseconds)
[info] - scan with filter push-down (106 milliseconds)
[info] - scan with column pruning (105 milliseconds)
[info] - scan with filter push-down and column pruning (108 milliseconds)
[info] - read/write with partition info (429 milliseconds)
[info] - null-handling.sql (2 seconds, 748 milliseconds)
[info] - show tables (99 milliseconds)
[info] - SQL API: create table as select (196 milliseconds)
[info] - DataFrameWriterV2: create table as select (195 milliseconds)
[info] - SQL API: replace table as select (422 milliseconds)
[info] - DataFrameWriterV2: replace table as select (410 milliseconds)
[info] - SQL API: insert and overwrite (424 milliseconds)
[info] - DataFrameWriterV2: insert and overwrite (359 milliseconds)
09:47:21.992 WARN org.apache.spark.sql.jdbc.JDBCV2Suite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.jdbc.JDBCV2Suite, thread names: QueryStageCreator-19, MVStore background writer nio:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-ff028eb7-d322-4e38-aae4-1c46cfccff05.mv.db, rpc-boss-76-1, shuffle-boss-79-1 =====
[info] ContinuousQueryStatusAndProgressSuite:
09:47:22.074 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:47:22.503 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 201 milliseconds
09:47:22.615 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 112 milliseconds
09:47:22.952 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 151 milliseconds
09:47:22.954 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.spark.SparkException: Could not find EpochCoordinator-c20329b4-cef1-4b9c-8a24-27d0e1a64bc5--ac3f32f7-81a1-4854-9f22-b872940d0eb6.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:178)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:564)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:93)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:91)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:58)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:47:22.954 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.spark.SparkException: Could not find EpochCoordinator-c20329b4-cef1-4b9c-8a24-27d0e1a64bc5--ac3f32f7-81a1-4854-9f22-b872940d0eb6.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:178)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:564)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:93)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:91)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:58)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:47:22.956 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 1 is aborting.
09:47:22.957 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 is aborting.
09:47:22.957 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 aborted.
09:47:22.957 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
09:47:22.957 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 1 aborted.
09:47:22.961 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:47:22.961 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:47:23.248 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 148 milliseconds
09:47:23.590 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 190 milliseconds
09:47:23.923 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. The trigger interval is 100 milliseconds, but spent 122 milliseconds
09:47:24.109 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.spark.SparkException: Could not find EpochCoordinator-cb06a18b-5257-4152-9a4d-1433f5283799--18cb1327-f32f-4d68-a3c0-cf48fdab6f96.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:178)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:564)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:93)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:91)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:58)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:47:24.109 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 1 is aborting.
09:47:24.109 ERROR org.apache.spark.util.Utils: Aborting task
org.apache.spark.SparkException: Could not find EpochCoordinator-cb06a18b-5257-4152-9a4d-1433f5283799--18cb1327-f32f-4d68-a3c0-cf48fdab6f96.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:178)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:564)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:93)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:91)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:58)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:47:24.109 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 1 aborted.
09:47:24.111 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 is aborting.
09:47:24.111 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 aborted.
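The "Could not find EpochCoordinator-..." errors and the writer aborts above are ordinary teardown noise for continuous-trigger queries: when the query stops, its EpochCoordinator RPC endpoint is unregistered while the long-running reader and writer tasks are still draining, so their last sends fail. For context, a minimal continuous-mode query with the same 100 ms interval reported by the ProcessingTimeExecutor warnings (source and sink choices are illustrative, assuming a SparkSession named `spark`):

    import org.apache.spark.sql.streaming.Trigger

    val q = spark.readStream.format("rate").load()
      .writeStream
      .format("console")
      .trigger(Trigger.Continuous("100 milliseconds"))
      .start()
    // Stopping the query tears down the coordinator endpoint; in-flight tasks may
    // then log "Could not find EpochCoordinator-..." as they abort, as seen above.
    q.stop()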
[info] - StreamingQueryStatus - ContinuousExecution isDataAvailable and isTriggerActive should be false (2 seconds, 68 milliseconds)
09:47:24.113 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:47:24.113 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:47:24.145 WARN org.apache.spark.sql.streaming.continuous.ContinuousQueryStatusAndProgressSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.continuous.ContinuousQueryStatusAndProgressSuite, thread names: rpc-boss-82-1, shuffle-boss-85-1 =====
[info] BucketedReadWithoutHiveSupportSuite:
[info] - read bucketed data (909 milliseconds)
[info] - except-all.sql (5 seconds, 871 milliseconds)
[info] - like-any.sql (732 milliseconds)
[info] - change-column.sql (200 milliseconds)
[info] - table-aliases.sql (479 milliseconds)
[info] - read partitioning bucketed tables with bucket pruning filters (3 seconds, 215 milliseconds)
[info] - read non-partitioning bucketed tables with bucket pruning filters (983 milliseconds)
[info] - inner-join.sql (2 seconds, 782 milliseconds)
[info] - like-all.sql (569 milliseconds)
[info] - read partitioning bucketed tables having null in bucketing key (2 seconds, 163 milliseconds)
[info] - bucket pruning support IsNaN (1 second, 121 milliseconds)
[info] - postgreSQL/float8.sql (4 seconds, 695 milliseconds)
[info] - read partitioning bucketed tables having composite filters (3 seconds, 587 milliseconds)
[info] - read bucketed table without filters (769 milliseconds)
[info] - postgreSQL/select.sql (3 seconds, 606 milliseconds)
[info] - avoid shuffle when join 2 bucketed tables (4 seconds, 109 milliseconds)
[info] - avoid shuffle when join keys are a super-set of bucket keys !!! IGNORED !!!
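The BucketedReadWithoutHiveSupportSuite results in this part of the log exercise Spark's table bucketing: a table written with a bucket spec lets later joins and aggregations on the bucket key skip the shuffle, which is what tests like "avoid shuffle when join 2 bucketed tables" assert. A minimal sketch of writing a bucketed table and reading it back (table and column names are illustrative, not the suite's):

import org.apache.spark.sql.SparkSession

object BucketedReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("bucketed-read-sketch")
      .getOrCreate()
    import spark.implicits._

    // Write a table hash-partitioned into 8 buckets on column "i".
    // The bucket spec is stored in the catalog, so no Hive metastore is needed.
    spark.range(0, 1000)
      .selectExpr("id AS i", "CAST(id AS STRING) AS s")
      .write
      .bucketBy(8, "i")
      .sortBy("i")
      .saveAsTable("bucketed_t") // hypothetical table name

    // Reading through the catalog exposes the bucket spec to the planner,
    // enabling bucket pruning for equality filters on the bucket column.
    spark.table("bucketed_t").filter($"i" === 42).show()

    spark.stop()
  }
}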
[info] - only shuffle one side when join bucketed table and non-bucketed table (3 seconds, 350 milliseconds)
[info] - postgreSQL/select_implicit.sql (7 seconds, 746 milliseconds)
[info] - only shuffle one side when 2 bucketed tables have different bucket number (3 seconds, 613 milliseconds)
[info] - postgreSQL/strings.sql (3 seconds, 840 milliseconds)
[info] - only shuffle one side when 2 bucketed tables have different bucket keys (3 seconds, 175 milliseconds)
[info] - shuffle when join keys are not equal to bucket keys (2 seconds, 801 milliseconds)
[info] - postgreSQL/with.sql (3 seconds, 863 milliseconds)
[info] - shuffle when join 2 bucketed tables with bucketing disabled (3 seconds, 48 milliseconds)
[info] - postgreSQL/timestamp.sql (2 seconds, 858 milliseconds)
[info] - postgreSQL/comments.sql (180 milliseconds)
[info] - postgreSQL/interval.sql (432 milliseconds)
[info] - postgreSQL/float4.sql (2 seconds, 270 milliseconds)
[info] - postgreSQL/select_distinct.sql (1 second, 947 milliseconds)
[info] - postgreSQL/create_view.sql (2 seconds, 617 milliseconds)
[info] - check sort and shuffle when bucket and sort columns are join keys (14 seconds, 838 milliseconds)
[info] - avoid shuffle and sort when sort columns are a super set of join keys (1 second, 399 milliseconds)
[info] - postgreSQL/aggregates_part2.sql (9 seconds, 711 milliseconds)
[info] - only sort one side when sort columns are different (2 seconds, 58 milliseconds)
09:48:16.372 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 5757.0 (TID 7023)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
09:48:16.373 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 5757.0 (TID 7022)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	... (remaining frames identical to the trace above)
09:48:16.375 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 5757.0 (TID 7023) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	... (remaining frames identical to the trace above)
09:48:16.375 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 5757.0 failed 1 times; aborting job
09:48:16.504 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 5759.0 (TID 7027)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	... (remaining frames identical to the trace above)
09:48:16.505 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 5759.0 (TID 7027) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	... (remaining frames identical to the trace above)
09:48:16.506 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 5759.0 failed 1 times; aborting job
09:48:16.507 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 5759.0 (TID 7026)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	... (remaining frames identical to the trace above)
09:48:16.645 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 5761.0 (TID 7030)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	... (remaining frames identical to the trace above)
09:48:16.646 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 5761.0 (TID 7030) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	... (remaining frames identical to the trace above)
09:48:16.647 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 5761.0 failed 1 times; aborting job
09:48:16.771 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 5763.0 (TID 7034)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	... (remaining frames identical to the trace above)
09:48:16.773 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 5763.0 (TID 7034) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	... (remaining frames identical to the trace above)
09:48:16.773 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 5763.0 failed 1 times; aborting job
09:48:16.901 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 5765.0 (TID 7039)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	... (remaining frames identical to the trace above)
09:48:16.903 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 5765.0 (TID 7039) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	... (remaining frames identical to the trace above)
09:48:16.903 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 5765.0 failed 1 times; aborting job
09:48:17.024 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 5767.0 (TID 7043)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	... (remaining frames identical to the trace above)
09:48:17.026 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 5767.0 (TID 7043) (192.168.10.31 executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	... (remaining frames identical to the trace above)
09:48:17.026 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 5767.0 failed 1 times; aborting job
09:48:17.032 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 5767.0 (TID 7042) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
[info] - only sort one side when sort columns are same but their ordering is different (2 seconds, 15 milliseconds)
[info] - postgreSQL/int4.sql (3 seconds, 267 milliseconds)
[info] - avoid shuffle when grouping keys are equal to bucket keys (1 second, 580 milliseconds)
[info] - sort should not be introduced when aliases are used (285 milliseconds)
[info] - bucket join should work with SubqueryAlias plan (421 milliseconds)
09:48:19.999 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(the same warning repeats 3 more times through 09:48:20.203)
[info] - avoid shuffle when grouping keys are a super-set of bucket keys (1 second, 275 milliseconds)
[info] - SPARK-17698 Join predicates should not contain filter clauses (1 second, 426 milliseconds)
09:48:28.817 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(the same warning repeats 3 more times through 09:48:29.007)
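The repeated ArithmeticException failures above are expected output from SQL tests running with ANSI mode on: when spark.sql.ansi.enabled is true, whole-stage codegen performs integer arithmetic through Math.multiplyExact / addExact / subtractExact, so overflow aborts the task instead of silently wrapping. A minimal sketch of that behavior (the query is an illustrative example, not one taken from the test files):

import org.apache.spark.sql.SparkSession

object AnsiOverflowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("ansi-overflow-sketch")
      .getOrCreate()

    // Legacy behavior: 4-byte ints wrap around silently on overflow.
    spark.conf.set("spark.sql.ansi.enabled", "false")
    spark.sql("SELECT int(2147483647) * int(2)").show() // prints -2

    // ANSI mode: generated code calls Math.multiplyExact, so the task fails
    // with java.lang.ArithmeticException: integer overflow, as in the log.
    spark.conf.set("spark.sql.ansi.enabled", "true")
    try {
      spark.sql("SELECT int(2147483647) * int(2)").show()
    } catch {
      case e: Exception => println(s"overflow surfaced: ${e.getMessage}")
    }

    spark.stop()
  }
}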
[info] - postgreSQL/groupingsets.sql (12 seconds, 616 milliseconds)
[info] - postgreSQL/limit.sql (637 milliseconds)
[info] - SPARK-19122 Re-order join predicates if they match with the child's output partitioning (9 seconds, 793 milliseconds)
[info] - postgreSQL/case.sql (2 seconds, 948 milliseconds)
[info] - postgreSQL/int2.sql (2 seconds, 395 milliseconds)
09:48:36.572 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(the same WindowExec warning repeats, typically twice per window operator, through 09:48:37.921; duplicates omitted)
[info] - SPARK-19122 No re-ordering should happen if set of join columns != set of child's partitioning columns (5 seconds, 826 milliseconds)
09:48:38.039 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(the same warning repeats through 09:48:39.860; duplicates omitted)
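The WindowExec warning flooding this part of the log is emitted whenever a window function is declared without a partition specification, which forces Spark to pull every row into a single partition. A minimal sketch showing both the warning-triggering form and a partitioned alternative (DataFrame and column names are illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object UnpartitionedWindowSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("window-warning-sketch")
      .getOrCreate()

    val df = spark.range(0, 100).toDF("id")

    // No partitionBy: WindowExec logs "No Partition Defined for Window
    // operation!" because all rows must be moved to one partition.
    val global = Window.orderBy("id")
    df.withColumn("rn", row_number().over(global)).show(5)

    // Adding a partition key silences the warning and keeps parallelism.
    val perGroup = Window.partitionBy(col("id") % 10).orderBy("id")
    df.withColumn("rn", row_number().over(perGroup)).show(5)

    spark.stop()
  }
}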
[info] - SPARK-22042 ReorderJoinPredicates can break when child's partitioning is not decided (1 second, 898 milliseconds)
09:48:39.948 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(the same warning repeats through 09:48:40.379; duplicates omitted)
[info] - error if there exists any malformed bucket files (491 milliseconds)
09:48:40.460 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
	(further duplicates through 09:48:40.814 omitted)
09:48:40.830 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation!
Moving all data to a single partition, this can cause serious performance degradation. 09:48:40.830 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:40.922 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:40.922 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:40.937 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:40.938 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.037 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.037 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.053 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.053 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.128 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.128 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.143 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.143 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.226 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.226 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.241 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.241 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:41.325 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.326 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.340 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.341 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.416 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.417 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. [info] - disable bucketing when the output doesn't contain all bucketing columns (1 second, 40 milliseconds) [info] - SPARK-27100 stack overflow: read data with large partitions !!! IGNORED !!! 09:48:41.432 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.432 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.509 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.509 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.524 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.524 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.609 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.610 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.632 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.633 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.714 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.715 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.730 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.730 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.812 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.812 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.834 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.835 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.911 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.911 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.936 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:41.936 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.025 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.026 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.043 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.043 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.127 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.127 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:42.142 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.143 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.230 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.230 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.246 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.246 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.327 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.328 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.343 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.343 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.427 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.427 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.443 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.443 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.526 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.526 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.541 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.541 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.625 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.625 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.641 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.641 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.723 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.723 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.740 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.740 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.824 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.825 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.840 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.840 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.921 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.921 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.937 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:42.937 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.020 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:43.021 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.036 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.037 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.125 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.125 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.155 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.156 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.246 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.247 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.265 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.266 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.357 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.358 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.377 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.378 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.467 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.467 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.484 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.485 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.573 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.573 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.592 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.592 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.683 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.683 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.703 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.704 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.795 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.796 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.816 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.816 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.906 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.906 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.941 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:43.941 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:44.027 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.027 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.045 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.045 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.131 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.132 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.148 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.148 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.231 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.232 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.249 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.249 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.334 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.334 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.353 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.353 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.462 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.462 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.481 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.481 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.548 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.548 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.564 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.564 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.635 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.635 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.650 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.650 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.755 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.756 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.771 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.771 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.841 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.841 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.856 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:44.856 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.927 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.927 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.942 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:44.942 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.016 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.017 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.031 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.031 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.103 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.103 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.117 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.118 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.189 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.190 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.205 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.205 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.300 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.300 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.317 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.317 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.396 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.396 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.412 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.412 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.492 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.492 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.510 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.510 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.586 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.587 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.602 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.602 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.680 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.680 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:45.695 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.695 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.783 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.784 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.801 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.801 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.890 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.891 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.909 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:45.909 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.000 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.001 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.017 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.018 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.108 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.109 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.126 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.126 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.215 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.215 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.234 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.235 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.325 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.325 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.343 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.343 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.438 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.438 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.457 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.457 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.542 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.542 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.560 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.560 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.647 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:46.648 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.667 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.667 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.756 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.756 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.792 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.792 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.887 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.888 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.905 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.905 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.991 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:46.991 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.008 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.009 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.102 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.103 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.121 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.122 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.217 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.217 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.234 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.235 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.322 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.322 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.340 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.340 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.425 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.425 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.442 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.442 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.527 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.527 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.544 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.545 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
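For reference: WindowExec prints this warning when a window specification declares no PARTITION BY clause, so every row is routed to one partition and processed serially. A minimal sketch of the triggering pattern and the partitioned alternative, assuming a local SparkSession and a toy DataFrame (all names illustrative, not taken from the test suites above):

    // Sketch only: the unpartitioned-window pattern behind the warning.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    object WindowWarningDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("window-warning").getOrCreate()
        import spark.implicits._
        val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("grp", "v")

        // No partitionBy: WindowExec logs "No Partition Defined for Window operation!"
        val global = Window.orderBy($"v")
        df.withColumn("rn", row_number().over(global)).show()

        // Partitioned window: work is distributed across partitions, no warning.
        val perGroup = Window.partitionBy($"grp").orderBy($"v")
        df.withColumn("rn", row_number().over(perGroup)).show()

        spark.stop()
      }
    }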
[info] - SPARK-29655 Read bucketed tables obeys spark.sql.shuffle.partitions (6 seconds, 152 milliseconds) 09:48:47.636 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.637 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.653 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.654 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.736 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.736 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.753 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.753 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.838 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.839 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.856 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.856 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.943 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.943 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.959 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:47.959 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.040 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 
09:48:48.041 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.057 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.058 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.148 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.148 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.164 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.164 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.244 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.244 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.260 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.260 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.345 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.345 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.362 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.363 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.449 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.449 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation. 09:48:48.467 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! 
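The repeated WindowExec warning above is emitted whenever a window function is evaluated with no PARTITION BY clause, so Spark must first shuffle every row into a single partition. A minimal sketch of both forms, with illustrative data and column names (not taken from the suites being run here):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    object WindowPartitionSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("window-sketch").getOrCreate()
        import spark.implicits._
        val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

        // No partitionBy: WindowExec logs "No Partition Defined for Window operation!"
        // and moves all rows into one partition before computing the row numbers.
        df.withColumn("rn", row_number().over(Window.orderBy($"value"))).show()

        // With partitionBy: rows are ranked per key and the work stays distributed,
        // so the warning is not emitted.
        df.withColumn("rn", row_number().over(Window.partitionBy($"key").orderBy($"value"))).show()

        spark.stop()
      }
    }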
[info] - postgreSQL/window_part4.sql (12 seconds, 135 milliseconds) [info] - postgreSQL/date.sql (3 seconds, 248 milliseconds) [info] - SPARK-32767 Bucket join should work if SHUFFLE_PARTITIONS larger than bucket number (9 seconds, 273 milliseconds) [info] - bucket coalescing eliminates shuffle (3 seconds, 748 milliseconds) [info] - bucket coalescing is not satisfied (21 seconds, 977 milliseconds) [info] - bucket coalescing is applied when join expressions match with partitioning expressions (714 milliseconds) 09:49:23.333 WARN org.apache.spark.sql.sources.BucketedReadWithoutHiveSupportSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.sources.BucketedReadWithoutHiveSupportSuite, thread names: QueryStageCreator-21, block-manager-storage-async-thread-pool-7, block-manager-storage-async-thread-pool-89, QueryStageCreator-26, block-manager-storage-async-thread-pool-84, block-manager-storage-async-thread-pool-68, block-manager-storage-async-thread-pool-6, QueryStageCreator-25, QueryStageCreator-22, QueryStageCreator-29, block-manager-storage-async-thread-pool-62, block-manager-storage-async-thread-pool-67, block-manager-storage-async-thread-pool-24, block-manager-storage-async-thread-pool-13, block-manager-storage-async-thread-pool-94, shuffle-boss-91-1, QueryStageCreator-23, QueryStageCreator-24, QueryStageCreator-31, block-manager-storage-async-thread-pool-82, block-manager-storage-async-thread-pool-1, QueryStageCreator-28, block-manager-storage-async-thread-pool-88, QueryStageCreator-27, QueryStageCreator-30, QueryStageCreator-20, block-manager-storage-async-thread-pool-45, rpc-boss-88-1, block-manager-storage-async-thread-pool-43, block-manager-storage-async-thread-pool-58 ===== [info] V2SessionCatalogTableSuite: [info] - listTables (22 milliseconds) [info] - createTable (9 milliseconds) [info] - createTable: with properties (6 milliseconds) [info] - createTable: table already exists (8 milliseconds) [info] - createTable: location (8 milliseconds) [info] - tableExists (6 milliseconds) [info] - loadTable (6 milliseconds) [info] - loadTable: table does not exist (2 milliseconds) [info] - invalidateTable (6 milliseconds) [info] - invalidateTable: table does not exist (1 millisecond) [info] - alterTable: add property (11 milliseconds) [info] - alterTable: add property to existing (6 milliseconds) [info] - alterTable: remove existing property (5 milliseconds) [info] - alterTable: remove missing property (6 milliseconds) [info] -
alterTable: add top-level column (6 milliseconds) [info] - alterTable: add required column (7 milliseconds) [info] - alterTable: add column with comment (7 milliseconds) [info] - alterTable: add nested column (9 milliseconds) [info] - alterTable: add column to primitive field fails (7 milliseconds) [info] - alterTable: add field to missing column fails (6 milliseconds) [info] - alterTable: update column data type (5 milliseconds) [info] - alterTable: update column nullability (6 milliseconds) [info] - alterTable: update missing column fails (7 milliseconds) [info] - alterTable: add comment (8 milliseconds) [info] - alterTable: replace comment (7 milliseconds) [info] - alterTable: add comment to missing column fails (7 milliseconds) [info] - alterTable: rename top-level column (8 milliseconds) [info] - alterTable: rename nested column (6 milliseconds) [info] - alterTable: rename struct column (5 milliseconds) [info] - alterTable: rename missing column fails (7 milliseconds) [info] - alterTable: multiple changes (7 milliseconds) [info] - alterTable: delete top-level column (7 milliseconds) [info] - alterTable: delete nested column (7 milliseconds) [info] - alterTable: delete missing column fails (7 milliseconds) [info] - alterTable: delete missing nested column fails (7 milliseconds) [info] - alterTable: table does not exist (1 millisecond) [info] - alterTable: location (7 milliseconds) [info] - dropTable (2 milliseconds) [info] - dropTable: table does not exist (0 milliseconds) [info] - renameTable (13 milliseconds) [info] - renameTable: fail if table does not exist (1 millisecond) [info] - renameTable: fail if new table name already exists (13 milliseconds) [info] - renameTable: fail if db does not match for old and new table names (8 milliseconds) 09:49:23.794 WARN org.apache.spark.sql.execution.datasources.v2.V2SessionCatalogTableSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.v2.V2SessionCatalogTableSuite, thread names: rpc-boss-94-1, shuffle-boss-97-1 ===== [info] BasicWriteTaskStatsTrackerSuite: [info] - No files in run (1 millisecond) [info] - Missing File (2 milliseconds) [info] - Empty filename is forwarded (1 millisecond) [info] - Null filename is only picked up in final status (1 millisecond) [info] - 0 byte file (12 milliseconds) [info] - File with data (12 milliseconds) [info] - Open file (12 milliseconds) [info] - Two files (22 milliseconds) [info] - Three files, last one empty (34 milliseconds) [info] - Three files, one not found (28 milliseconds) [info] ArrowConvertersSuite: [info] - collect to arrow record batch (434 milliseconds) [info] - short conversion (131 milliseconds) [info] - int conversion (81 milliseconds) [info] - long conversion (87 milliseconds) [info] - float conversion (102 milliseconds) [info] - double conversion (88 milliseconds) [info] - decimal conversion (97 milliseconds) [info] - index conversion (46 milliseconds) [info] - mixed numeric type conversion (107 milliseconds) [info] - string type conversion (117 milliseconds) [info] - boolean type conversion (72 milliseconds) [info] - byte type conversion (68 milliseconds) [info] - binary type conversion (76 milliseconds) [info] - date type conversion (85 milliseconds) [info] - timestamp type conversion (85 milliseconds) [info] - floating-point NaN (79 milliseconds) [info] - array type conversion (150 milliseconds) [info] - struct type conversion (148 milliseconds) [info] - partitioned DataFrame (81 milliseconds) [info] - empty frame collect (58 milliseconds) [info] - empty 
partition collect (48 milliseconds) [info] - max records in batch conf (45 milliseconds) [info] - interval is unsupported for arrow (64 milliseconds) [info] - test Arrow Validator (216 milliseconds) [info] - roundtrip arrow batches (19 milliseconds) [info] - ArrowBatchStreamWriter roundtrip (15 milliseconds) 09:49:26.641 WARN org.apache.spark.sql.execution.arrow.ArrowConvertersSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.arrow.ArrowConvertersSuite, thread names: shuffle-boss-103-1, rpc-boss-100-1 ===== [info] PythonForeachWriterSuite: Exception in thread "Thread-17366" java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.$anonfun$remove$1(PythonForeachWriter.scala:123) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.withLock(PythonForeachWriter.scala:150) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.org$apache$spark$sql$execution$python$PythonForeachWriter$UnsafeRowBuffer$$remove(PythonForeachWriter.scala:121) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:106) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:104) at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) at org.apache.spark.sql.execution.python.PythonForeachWriterSuite$BufferTester$$anon$1.run(PythonForeachWriterSuite.scala:105) [info] - UnsafeRowBuffer: iterator blocks when no data is available (159 milliseconds) [info] - UnsafeRowBuffer: iterator unblocks when all data added (6 milliseconds) Exception in thread "Thread-17368" java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2173) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.$anonfun$remove$1(PythonForeachWriter.scala:123) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.withLock(PythonForeachWriter.scala:150) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer.org$apache$spark$sql$execution$python$PythonForeachWriter$UnsafeRowBuffer$$remove(PythonForeachWriter.scala:121) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:106) at org.apache.spark.sql.execution.python.PythonForeachWriter$UnsafeRowBuffer$$anon$1.getNext(PythonForeachWriter.scala:104) at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) at org.apache.spark.sql.execution.python.PythonForeachWriterSuite$BufferTester$$anon$1.run(PythonForeachWriterSuite.scala:105) [info] - UnsafeRowBuffer: handles more data than memory (2 seconds, 289 milliseconds) [info] AlterTableDropPartitionSuite: [info] - ALTER TABLE .. DROP PARTITION V2: single partition (211 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: multiple partitions (103 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: multi-part partition (106 milliseconds) [info] - ALTER TABLE .. 
DROP PARTITION V2: table to alter does not exist (31 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: case sensitivity in resolving partition specs (110 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: SPARK-33676: not fully specified partition spec (28 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: partition not exists (72 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: SPARK-33990: don not return data from dropped partition (260 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: SPARK-33950, SPARK-33987: refresh cache after partition dropping (332 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: SPARK-33591: null as a partition value (60 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: SPARK-33650: drop partition into a table which doesn't support partition management (26 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: purge partition data (50 milliseconds) [info] - ALTER TABLE .. DROP PARTITION V2: empty string as partition value (54 milliseconds) 09:49:30.660 WARN org.apache.spark.sql.execution.command.v2.AlterTableDropPartitionSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.command.v2.AlterTableDropPartitionSuite, thread names: shuffle-boss-109-1, rpc-boss-106-1 ===== [info] UnsafeKVExternalSorterSuite: [info] - kv sorting key schema [] and value schema [] (87 milliseconds) [info] - kv sorting key schema [int] and value schema [] (86 milliseconds) [info] - kv sorting key schema [] and value schema [int] (63 milliseconds) [info] - kv sorting key schema [int] and value schema [float,float,double,string,float] (26 milliseconds) [info] - kv sorting key schema [double,string,string,int,float,string,string] and value schema [double,int,string,int,double,string,double] (260 milliseconds) [info] - kv sorting key schema [int,string,float,int,int,string] and value schema [double,float,float,string,string,double,float,float,float,float] (135 milliseconds) [info] - kv sorting key schema [string,double,int,int,string,string] and value schema [int,int,string,float,float,double,double] (320 milliseconds) [info] - kv sorting key schema [int,float,float,int,float,float,int,int,float,int] and value schema [double,float,float,double] (115 milliseconds) [info] - kv sorting key schema [double,int,string,double,float,float] and value schema [float,string] (117 milliseconds) [info] - kv sorting with records that exceed page size (90 milliseconds) [info] - SPARK-23376: Create UnsafeKVExternalSorter with BytesToByteMap having duplicated keys (37 milliseconds) [info] - SPARK-31952: create UnsafeKVExternalSorter with existing map should count spilled memory size correctly (34 milliseconds) [info] - SPARK-31952: UnsafeKVExternalSorter.merge should accumulate totalSpillBytes (50 milliseconds) 09:49:32.834 WARN org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.UnsafeKVExternalSorterSuite, thread names: shuffle-boss-115-1, rpc-boss-112-1 ===== [info] ShowPartitionsSuite: [info] - SHOW PARTITIONS V1: show partitions of non-partitioned table (91 milliseconds) [info] - SHOW PARTITIONS V1: non-partitioning columns (554 milliseconds) [info] - SHOW PARTITIONS V1: show everything (554 milliseconds) [info] - SHOW PARTITIONS V1: filter by partitions (668 milliseconds) [info] - SHOW PARTITIONS V1: show everything more than 5 part keys (436 milliseconds) [info] - SHOW PARTITIONS V1: SPARK-33667: case sensitivity of partition spec (302 milliseconds) [info] - SHOW PARTITIONS V1: 
SPARK-33777: sorted output (97 milliseconds) [info] - SHOW PARTITIONS V1: show everything in the default database (578 milliseconds) [info] - SHOW PARTITIONS V1: show partitions of a view (534 milliseconds) [info] - SHOW PARTITIONS V1: show partitions of a temporary view (11 milliseconds) [info] - SHOW PARTITIONS V1: SPARK-33591: null as a partition value (392 milliseconds) 09:49:37.128 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored: [info] - SHOW PARTITIONS V1: issue exceptions on the temporary view (16 milliseconds) [info] - SHOW PARTITIONS V1: show partitions from a datasource (406 milliseconds) [info] - SHOW PARTITIONS V1: SPARK-33904: null and empty string as partition values (492 milliseconds) 09:49:38.064 WARN org.apache.spark.sql.execution.command.v1.ShowPartitionsSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.command.v1.ShowPartitionsSuite, thread names: shuffle-boss-121-1, rpc-boss-118-1 ===== [info] ReplaceNullWithFalseInPredicateEndToEndSuite: [info] - SPARK-25860: Replace Literal(null, _) with FalseLiteral whenever possible (1 second, 113 milliseconds) [info] - SPARK-26107: Replace Literal(null, _) with FalseLiteral in higher-order functions (933 milliseconds) [info] - SPARK-33847: replace None of elseValue inside CaseWhen to FalseLiteral (230 milliseconds) 09:49:40.414 WARN org.apache.spark.sql.ReplaceNullWithFalseInPredicateEndToEndSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.ReplaceNullWithFalseInPredicateEndToEndSuite, thread names: rpc-boss-124-1, shuffle-boss-127-1 ===== [info] ContinuousEpochBacklogSuite: 09:49:40.474 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled. 09:49:40.658 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution: Query [id = 730696d3-e64b-4d3b-80f0-491247901e94, runId = 216e14de-8dfb-4bf7-94e6-4d087620bec7] received exception java.lang.IllegalStateException: Size of the partition offset queue has exceeded its maximum 09:49:40.660 WARN org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor: Current batch is falling behind. 
The trigger interval is 1 milliseconds, but spent 2 milliseconds 09:49:40.662 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution: Query [id = 730696d3-e64b-4d3b-80f0-491247901e94, runId = 216e14de-8dfb-4bf7-94e6-4d087620bec7] terminated with error java.lang.IllegalStateException: Size of the partition offset queue has exceeded its maximum at org.apache.spark.sql.execution.streaming.continuous.EpochCoordinator.org$apache$spark$sql$execution$streaming$continuous$EpochCoordinator$$checkProcessingQueueBoundaries(EpochCoordinator.scala:235) at org.apache.spark.sql.execution.streaming.continuous.EpochCoordinator$$anonfun$receive$1.applyOrElse(EpochCoordinator.scala:230) at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75) at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 09:49:40.663 ERROR org.apache.spark.util.Utils: Aborting task org.apache.spark.SparkException: Could not find EpochCoordinator-216e14de-8dfb-4bf7-94e6-4d087620bec7--65d7b4c5-55a5-444c-a04a-880d83dd14f2. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:178) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:564) at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116) at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:93) at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:91) at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755) at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.$anonfun$compute$1(ContinuousWriteRDD.scala:58) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471) at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:84) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 09:49:40.664 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 is aborting. 09:49:40.664 ERROR org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD: Writer for partition 0 aborted. [info] - epoch backlog overflow (213 milliseconds) 09:49:40.667 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (192.168.10.31 executor driver): TaskKilled (Stage cancelled) 09:49:40.693 WARN org.apache.spark.sql.streaming.continuous.ContinuousEpochBacklogSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.continuous.ContinuousEpochBacklogSuite, thread names: shuffle-boss-133-1, rpc-boss-130-1 ===== [info] DeprecatedAPISuite: [info] - functions.toDegrees (582 milliseconds) [info] - functions.toRadians (536 milliseconds) [info] - functions.approxCountDistinct (525 milliseconds) [info] - functions.monotonicallyIncreasingId (167 milliseconds) [info] - Column.!== (243 milliseconds) [info] - Dataset.registerTempTable (11 milliseconds) [info] - SQLContext.setActive/clearActive (3 milliseconds) [info] - SQLContext.applySchema (158 milliseconds) [info] - SQLContext.parquetFile (324 milliseconds) [info] - SQLContext.jsonFile (1 second, 203 milliseconds) [info] - SQLContext.load (767 milliseconds) 09:49:45.292 WARN org.apache.spark.sql.DeprecatedAPISuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DeprecatedAPISuite, thread names: rpc-boss-136-1, shuffle-boss-139-1 ===== [info] NullableColumnAccessorSuite: [info] - Nullable NULL column accessor: empty column (4 milliseconds) [info] - Nullable NULL column accessor: access null values (12 milliseconds) [info] - Nullable BOOLEAN column accessor: empty column (0 milliseconds) [info] - Nullable BOOLEAN column accessor: access null values (6 milliseconds) [info] - Nullable BYTE column accessor: empty column (1 millisecond) [info] - Nullable BYTE column accessor: access null values (7 milliseconds) [info] - Nullable SHORT column accessor: empty column (1 millisecond) [info] - Nullable SHORT column accessor: access null values (7 milliseconds) [info] - Nullable INT column accessor: empty column (0 milliseconds) [info] - Nullable INT column accessor: access null values (6 milliseconds) [info] - Nullable LONG column accessor: empty column (0 milliseconds) [info] - Nullable LONG column accessor: access null values (1 millisecond) [info] - Nullable FLOAT column accessor: empty column (1 millisecond) [info] - Nullable FLOAT column accessor: access null values (6 milliseconds) [info] - Nullable DOUBLE column accessor: empty column (1 millisecond) [info] - Nullable DOUBLE column accessor: access null values (6 milliseconds) [info] - Nullable STRING column accessor: empty column (0 milliseconds) [info] - Nullable STRING column accessor: access null values (1 millisecond) [info] - Nullable BINARY column accessor: empty column (0 milliseconds) [info] - Nullable BINARY column accessor: access null values (7 milliseconds) [info] - Nullable COMPACT_DECIMAL column accessor: empty column (0 milliseconds) [info] - Nullable COMPACT_DECIMAL column accessor: access null values (7 milliseconds) [info] - Nullable LARGE_DECIMAL column accessor: empty column (0 milliseconds) [info] - Nullable LARGE_DECIMAL column accessor: access null values (7 milliseconds) [info] - Nullable STRUCT 
column accessor: empty column (1 millisecond) [info] - Nullable STRUCT column accessor: access null values (7 milliseconds) [info] - Nullable ARRAY column accessor: empty column (1 millisecond) [info] - Nullable ARRAY column accessor: access null values (2 milliseconds) [info] - Nullable MAP column accessor: empty column (2 milliseconds) [info] - Nullable MAP column accessor: access null values (15 milliseconds) [info] - Nullable CALENDAR_INTERVAL column accessor: empty column (1 millisecond) [info] - Nullable CALENDAR_INTERVAL column accessor: access null values (7 milliseconds) [info] MergedParquetReadSchemaSuite: [info] - append column at the end (665 milliseconds) [info] - hide column at the end (648 milliseconds) [info] - append column into middle (523 milliseconds) [info] - hide column in the middle (428 milliseconds) [info] - add a nested column at the end of the leaf struct column (472 milliseconds) [info] - add a nested column in the middle of the leaf struct column (441 milliseconds) [info] - add a nested column at the end of the middle struct column (449 milliseconds) [info] - add a nested column in the middle of the middle struct column (506 milliseconds) [info] - hide a nested column at the end of the leaf struct column (626 milliseconds) [info] - hide a nested column in the middle of the leaf struct column (593 milliseconds) [info] - hide a nested column at the end of the middle struct column (612 milliseconds) [info] - hide a nested column in the middle of the middle struct column (612 milliseconds) [info] - change column position (583 milliseconds) 09:49:52.692 WARN org.apache.spark.sql.execution.datasources.MergedParquetReadSchemaSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.MergedParquetReadSchemaSuite, thread names: block-manager-storage-async-thread-pool-89, block-manager-storage-async-thread-pool-37, block-manager-storage-async-thread-pool-50, block-manager-storage-async-thread-pool-57, block-manager-storage-async-thread-pool-44, rpc-boss-142-1, block-manager-storage-async-thread-pool-15, block-manager-storage-async-thread-pool-25, block-manager-storage-async-thread-pool-79, shuffle-boss-145-1, block-manager-storage-async-thread-pool-6, block-manager-storage-async-thread-pool-14, block-manager-storage-async-thread-pool-42, block-manager-storage-async-thread-pool-72, block-manager-storage-async-thread-pool-95, block-manager-storage-async-thread-pool-28, block-manager-storage-async-thread-pool-63, block-manager-storage-async-thread-pool-1, block-manager-storage-async-thread-pool-18, block-manager-storage-async-thread-pool-5, block-manager-storage-async-thread-pool-35, block-manager-storage-async-thread-pool-88, block-manager-storage-async-thread-pool-49, block-manager-storage-async-thread-pool-85, block-manager-storage-async-thread-pool-90, block-manager-storage-async-thread-pool-81, block-manager-storage-async-thread-pool-69, block-manager-storage-async-thread-pool-97, block-manager-storage-async-thread-pool-75, block-manager-storage-async-thread-pool-58 ===== [info] StreamingQueryListenersConfSuite: 09:49:52.762 WARN org.apache.spark.sql.streaming.StreamingQueryManager: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled. 
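The StreamingQueryManager warning just above appears whenever spark.sql.adaptive.enabled is set while a streaming query starts: adaptive query execution is a batch-only feature, so Spark logs the warning and disables it for that query instead of failing. A minimal sketch, assuming a local session and the built-in rate source:

    import org.apache.spark.sql.SparkSession

    object StreamingAqeSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local[*]").appName("streaming-aqe-sketch").getOrCreate()
        spark.conf.set("spark.sql.adaptive.enabled", "true")

        // Starting any streaming query now logs the warning seen above and
        // silently runs the query without adaptive execution.
        val query = spark.readStream.format("rate").load()
          .writeStream.format("console").start()
        query.awaitTermination(5000) // let a few micro-batches run, then shut down
        query.stop()
        spark.stop()
      }
    }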
[info] - test if the configured query listener is loaded (83 milliseconds) 09:49:52.850 WARN org.apache.spark.sql.streaming.StreamingQueryListenersConfSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.streaming.StreamingQueryListenersConfSuite, thread names: shuffle-boss-151-1, rpc-boss-148-1 ===== [info] OrcReadSchemaSuite: [info] - append column at the end (645 milliseconds) [info] - hide column at the end (573 milliseconds) [info] - append column into middle (490 milliseconds) [info] - hide column in the middle (440 milliseconds) [info] - add a nested column at the end of the leaf struct column (500 milliseconds) [info] - add a nested column in the middle of the leaf struct column (447 milliseconds) [info] - add a nested column at the end of the middle struct column (434 milliseconds) [info] - add a nested column in the middle of the middle struct column (440 milliseconds) [info] - hide a nested column at the end of the leaf struct column (583 milliseconds) [info] - hide a nested column in the middle of the leaf struct column (638 milliseconds) [info] - hide a nested column at the end of the middle struct column (581 milliseconds) [info] - hide a nested column in the middle of the middle struct column (611 milliseconds) [info] - change column position (484 milliseconds) 09:49:59.808 WARN org.apache.spark.sql.execution.datasources.OrcReadSchemaSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.OrcReadSchemaSuite, thread names: block-manager-ask-thread-pool-67, block-manager-ask-thread-pool-94, block-manager-ask-thread-pool-39, rpc-boss-154-1, block-manager-ask-thread-pool-47, block-manager-ask-thread-pool-80, shuffle-boss-157-1, block-manager-ask-thread-pool-89, block-manager-ask-thread-pool-27, block-manager-ask-thread-pool-7, block-manager-ask-thread-pool-38, block-manager-ask-thread-pool-35, block-manager-ask-thread-pool-14, block-manager-ask-thread-pool-41 ===== [info] JDBCSuite: [info] - SELECT * (59 milliseconds) [info] - SELECT * WHERE (simple predicates) (857 milliseconds) [info] - SELECT COUNT(1) WHERE (predicates) (91 milliseconds) [info] - SELECT * WHERE (quoted strings) (41 milliseconds) [info] - SELECT first field (52 milliseconds) [info] - SELECT first field when fetchsize is two (33 milliseconds) [info] - SELECT second field (50 milliseconds) [info] - SELECT second field when fetchsize is two (33 milliseconds) [info] - SELECT * partitioned (39 milliseconds) [info] - SELECT WHERE (simple predicates) partitioned (119 milliseconds) [info] - SELECT second field partitioned (37 milliseconds) [info] - overflow of partition bound difference does not give negative stride (32 milliseconds) [info] - Register JDBC query with renamed fields (16 milliseconds) [info] - Basic API (28 milliseconds) [info] - Missing partition columns (29 milliseconds) [info] - Basic API with FetchSize (177 milliseconds) [info] - Partitioning via JDBCPartitioningInfo API (40 milliseconds) [info] - Partitioning via list-of-where-clauses API (35 milliseconds) [info] - Partitioning on column that might have null values. (131 milliseconds) [info] - Partitioning on column where numPartitions is zero (61 milliseconds) 09:50:02.008 WARN org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 4; Input number of partitions: 10; Lower bound: 1; Upper bound: 5. 
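The JDBCRelation warning above records how Spark caps partitioned-read parallelism: with an integral partition column, at most (upperBound - lowerBound) partitions can be non-empty, so a request for 10 partitions over the bound range 1..5 is reduced to 4. A sketch of the options involved, runnable in spark-shell, using a hypothetical H2 URL, table, and column rather than the suite's actual fixtures (the queryTimeout option exercised by the SPARK-23856 test further down is included for completeness):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("jdbc-sketch").getOrCreate()

    // Partitioned JDBC read: Spark splits the [lowerBound, upperBound) range of
    // partitionColumn into numPartitions strides, but never into more partitions
    // than the range allows.
    val people = spark.read
      .format("jdbc")
      .option("url", "jdbc:h2:mem:testdb")   // hypothetical in-memory H2 database
      .option("dbtable", "TEST.PEOPLE")      // hypothetical table
      .option("partitionColumn", "THEID")    // hypothetical integral column
      .option("lowerBound", "1")
      .option("upperBound", "5")
      .option("numPartitions", "10")         // capped to 4, as the warning reports
      .option("queryTimeout", "5")           // seconds; 0 means no limit
      .load()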
[info] - Partitioning on column where numPartitions are more than the number of total rows (67 milliseconds) [info] - Partitioning on column where lowerBound is equal to upperBound (59 milliseconds) [info] - Partitioning on column where lowerBound is larger than upperBound (5 milliseconds) [info] - SELECT * on partitioned table with a nullable partition column (37 milliseconds) [info] - H2 integral types (52 milliseconds) [info] - H2 null entries (33 milliseconds) [info] - H2 string types (49 milliseconds) [info] - H2 time types (53 milliseconds) [info] - SPARK-33888: test TIME types (120 milliseconds) [info] - test DATE types (76 milliseconds) [info] - test DATE types in cache (98 milliseconds) [info] - test types for null value (38 milliseconds) [info] - H2 floating-point types (116 milliseconds) [info] - SQL query as table name (60 milliseconds) [info] - Pass extra properties via OPTIONS (12 milliseconds) [info] - Remap types via JdbcDialects (40 milliseconds) [info] - Map TINYINT to ByteType via JdbcDialects (52 milliseconds) [info] - Default jdbc dialect registration (1 millisecond) [info] - quote column names by jdbc dialect (2 milliseconds) [info] - compile filters (5 milliseconds) [info] - Dialect unregister (0 milliseconds) [info] - Aggregated dialects (3 milliseconds) [info] - Aggregated dialects: isCascadingTruncateTable (1 millisecond) [info] - DB2Dialect type mapping (2 milliseconds) [info] - PostgresDialect type mapping (3 milliseconds) [info] - DerbyDialect jdbc type mapping (1 millisecond) [info] - OracleDialect jdbc type mapping (1 millisecond) [info] - MsSqlServerDialect jdbc type mapping (4 milliseconds) [info] - SPARK-28152 MsSqlServerDialect catalyst type mapping (1 millisecond) [info] - table exists query by jdbc dialect (0 milliseconds) [info] - truncate table query by jdbc dialect (0 milliseconds) [info] - SPARK-22880: Truncate table with CASCADE by jdbc dialect (0 milliseconds) [info] - Test DataFrame.where for Date and Timestamp (39 milliseconds) [info] - test credentials in the properties are not in plan output (20 milliseconds) [info] - test credentials in the connection url are not in the plan output (15 milliseconds) [info] - hide credentials in create and describe a persistent/temp table (111 milliseconds) [info] - Hide credentials in show create table (67 milliseconds) [info] - Replace CatalogUtils.maskCredentials with SQLConf.get.redactOptions (173 milliseconds) [info] - SPARK 12941: The data type mapping for StringType to Oracle (1 millisecond) [info] - SPARK-16625: General data types to be mapped to Oracle (1 millisecond) [info] - SPARK-15916: JDBC filter operator push down should respect operator precedence (128 milliseconds) [info] - SPARK-16387: Reserved SQL words are not escaped by JDBC writer (5 milliseconds) [info] - SPARK-18141: Predicates on quoted column names in the jdbc data source (597 milliseconds) [info] - SPARK-18419: Fix `asConnectionProperties` to filter case-insensitively (0 milliseconds) [info] - SPARK-16848: jdbc API throws an exception for user specified schema (2 milliseconds) [info] - jdbc API support custom schema (63 milliseconds) [info] - jdbc API custom schema DDL-like strings. 
(67 milliseconds) [info] - SPARK-15648: teradataDialect StringType data mapping (1 millisecond) [info] - SPARK-15648: teradataDialect BooleanType data mapping (0 milliseconds) [info] - Checking metrics correctness with JDBC (95 milliseconds) [info] - unsupported types (11 milliseconds) [info] - SPARK-19318: Connection properties keys should be case-sensitive. (0 milliseconds) [info] - SPARK-19318: jdbc data source options should be treated case-insensitive. (145 milliseconds) 09:50:04.620 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 111.0 (TID 130) org.h2.jdbc.JdbcSQLException: Schema "DUMMY" not found; SQL statement: SET SCHEMA DUMMY [90079-195] at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) at org.h2.message.DbException.get(DbException.java:179) at org.h2.message.DbException.get(DbException.java:155) at org.h2.engine.Database.getSchema(Database.java:1755) at org.h2.command.dml.Set.update(Set.java:408) at org.h2.command.CommandContainer.update(CommandContainer.java:101) at org.h2.command.Command.executeUpdate(Command.java:260) at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:207) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:286) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 09:50:04.624 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 111.0 (TID 130) (192.168.10.31 executor driver): org.h2.jdbc.JdbcSQLException: Schema "DUMMY" not found; SQL statement: SET SCHEMA DUMMY [90079-195] at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) at org.h2.message.DbException.get(DbException.java:179) at org.h2.message.DbException.get(DbException.java:155) at org.h2.engine.Database.getSchema(Database.java:1755) at org.h2.command.dml.Set.update(Set.java:408) at org.h2.command.CommandContainer.update(CommandContainer.java:101) at org.h2.command.Command.executeUpdate(Command.java:260) at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:207) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:286) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 09:50:04.624 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 111.0 failed 1 times; aborting job [info] - SPARK-21519: option sessionInitStatement, run SQL to initialize the database session. (129 milliseconds) [info] - jdbc data source shouldn't have unnecessary metadata in its schema (18 milliseconds) [info] - postgreSQL/numeric.sql (1 minute, 16 seconds) 09:50:08.950 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 113.0 (TID 132) org.h2.jdbc.JdbcSQLException: Statement was canceled or the session timed out; SQL statement: SELECT "C0","C1","C2","C3","C4","C5","C6","C7","C8","C9","C10","C11","C12","C13","C14","C15","C16","C17","C18","C19","C20","C21","C22","C23","C24","C25","C26","C27","C28","C29","C30","C31","C32","C33","C34","C35","C36","C37","C38","C39","C40","C41","C42","C43","C44","C45","C46","C47","C48","C49","C50","C51","C52","C53","C54","C55","C56","C57","C58","C59","C60","C61","C62","C63","C64","C65","C66","C67","C68","C69","C70","C71","C72","C73","C74","C75","C76","C77","C78","C79","C80","C81","C82","C83","C84","C85","C86","C87","C88","C89","C90","C91","C92","C93","C94","C95","C96","C97","C98","C99","C100" FROM (SELECT t0.NAME AS c0, t1.NAME AS c1, t2.NAME AS c2, t3.NAME AS c3, t4.NAME AS c4, t5.NAME AS c5, t6.NAME AS c6, t7.NAME AS c7, t8.NAME AS c8, t9.NAME AS c9, t10.NAME AS c10, t11.NAME AS c11, t12.NAME AS c12, t13.NAME AS c13, t14.NAME AS c14, t15.NAME AS c15, t16.NAME AS c16, t17.NAME AS c17, t18.NAME AS c18, t19.NAME AS c19, t20.NAME AS c20, t21.NAME AS c21, t22.NAME AS c22, t23.NAME AS c23, t24.NAME AS c24, t25.NAME AS c25, t26.NAME AS c26, t27.NAME AS c27, t28.NAME AS c28, t29.NAME AS c29, t30.NAME AS c30, t31.NAME AS c31, t32.NAME AS c32, t33.NAME AS c33, t34.NAME AS c34, t35.NAME AS c35, t36.NAME AS c36, t37.NAME AS c37, t38.NAME AS c38, t39.NAME AS c39, t40.NAME AS c40, t41.NAME AS c41, t42.NAME AS c42, t43.NAME AS c43, t44.NAME AS c44, t45.NAME AS c45, t46.NAME AS c46, t47.NAME AS c47, t48.NAME AS c48, t49.NAME AS c49, t50.NAME AS c50, t51.NAME AS c51, t52.NAME AS c52, t53.NAME AS c53, t54.NAME AS c54, t55.NAME AS c55, t56.NAME AS c56, t57.NAME AS c57, t58.NAME AS c58, t59.NAME AS c59, t60.NAME AS c60, t61.NAME AS c61, t62.NAME AS c62, t63.NAME AS c63, t64.NAME AS c64, t65.NAME AS c65, t66.NAME AS c66, t67.NAME AS c67, t68.NAME AS c68, t69.NAME AS c69, t70.NAME AS c70, t71.NAME AS c71, t72.NAME AS c72, t73.NAME AS c73, t74.NAME AS c74, t75.NAME AS c75, t76.NAME AS c76, t77.NAME AS c77, t78.NAME AS c78, t79.NAME AS c79, t80.NAME AS c80, t81.NAME AS c81, t82.NAME AS c82, t83.NAME AS c83, t84.NAME AS c84, t85.NAME AS c85, t86.NAME AS c86, t87.NAME AS c87, t88.NAME AS c88, t89.NAME AS c89, t90.NAME AS c90, t91.NAME AS c91, t92.NAME AS c92, t93.NAME AS c93, t94.NAME AS c94, t95.NAME AS c95, t96.NAME AS c96, t97.NAME AS c97, t98.NAME AS c98, t99.NAME AS c99, t100.NAME 
AS c100 FROM test.people t0 join test.people t1 join test.people t2 join test.people t3 join test.people t4 join test.people t5 join test.people t6 join test.people t7 join test.people t8 join test.people t9 join test.people t10 join test.people t11 join test.people t12 join test.people t13 join test.people t14 join test.people t15 join test.people t16 join test.people t17 join test.people t18 join test.people t19 join test.people t20 join test.people t21 join test.people t22 join test.people t23 join test.people t24 join test.people t25 join test.people t26 join test.people t27 join test.people t28 join test.people t29 join test.people t30 join test.people t31 join test.people t32 join test.people t33 join test.people t34 join test.people t35 join test.people t36 join test.people t37 join test.people t38 join test.people t39 join test.people t40 join test.people t41 join test.people t42 join test.people t43 join test.people t44 join test.people t45 join test.people t46 join test.people t47 join test.people t48 join test.people t49 join test.people t50 join test.people t51 join test.people t52 join test.people t53 join test.people t54 join test.people t55 join test.people t56 join test.people t57 join test.people t58 join test.people t59 join test.people t60 join test.people t61 join test.people t62 join test.people t63 join test.people t64 join test.people t65 join test.people t66 join test.people t67 join test.people t68 join test.people t69 join test.people t70 join test.people t71 join test.people t72 join test.people t73 join test.people t74 join test.people t75 join test.people t76 join test.people t77 join test.people t78 join test.people t79 join test.people t80 join test.people t81 join test.people t82 join test.people t83 join test.people t84 join test.people t85 join test.people t86 join test.people t87 join test.people t88 join test.people t89 join test.people t90 join test.people t91 join test.people t92 join test.people t93 join test.people t94 join test.people t95 join test.people t96 join test.people t97 join test.people t98 join test.people t99 join test.people t100) [57014-195] at org.h2.message.DbException.getJdbcSQLException(DbException.java:345) at org.h2.message.DbException.get(DbException.java:179) at org.h2.message.DbException.get(DbException.java:155) at org.h2.message.DbException.get(DbException.java:144) at org.h2.engine.Session.checkCanceled(Session.java:1211) at org.h2.command.Prepared.checkCanceled(Prepared.java:275) at org.h2.command.Prepared.setCurrentRowNumber(Prepared.java:341) at org.h2.command.dml.Select.access$500(Select.java:64) at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1454) at org.h2.result.LazyResult.hasNext(LazyResult.java:79) at org.h2.result.LazyResult.next(LazyResult.java:59) at org.h2.command.dml.Select.queryFlat(Select.java:519) at org.h2.command.dml.Select.queryWithoutCache(Select.java:625) at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114) at org.h2.command.dml.Query.query(Query.java:371) at org.h2.command.dml.Query.query(Query.java:333) at org.h2.index.ViewIndex.find(ViewIndex.java:291) at org.h2.index.ViewIndex.find(ViewIndex.java:161) at org.h2.index.BaseIndex.find(BaseIndex.java:128) at org.h2.index.IndexCursor.find(IndexCursor.java:169) at org.h2.table.TableFilter.next(TableFilter.java:468) at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1452) at org.h2.result.LazyResult.hasNext(LazyResult.java:79) at org.h2.result.LazyResult.next(LazyResult.java:59) at 
org.h2.command.dml.Select.queryFlat(Select.java:519) at org.h2.command.dml.Select.queryWithoutCache(Select.java:625) at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114) at org.h2.command.dml.Query.query(Query.java:371) at org.h2.command.dml.Query.query(Query.java:333) at org.h2.command.CommandContainer.query(CommandContainer.java:113) at org.h2.command.Command.executeQuery(Command.java:201) at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:111) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:304) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373) at org.apache.spark.rdd.RDD.iterator(RDD.scala:337) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:131) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 09:50:08.958 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 113.0 (TID 132) (192.168.10.31 executor driver): org.h2.jdbc.JdbcSQLException: Statement was canceled or the session timed out [57014-195] (same SQL statement and stack trace as the executor exception above) 09:50:08.959 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 113.0 failed 1 times; aborting job [info] - SPARK-23856 Spark jdbc setQueryTimeout option (4 seconds, 262 milliseconds) [info] - SPARK-24327 verify and normalize a partition column based on a JDBC resolved schema (37 milliseconds) [info] - query JDBC option - negative tests (29 milliseconds) [info] - query JDBC option (192 milliseconds) [info] - SPARK-22814 support date/timestamp types in partitionColumn (216 milliseconds) [info] - throws an exception for unsupported partition column types (4 milliseconds) [info] - SPARK-24288: Enable preventing predicate pushdown (236 milliseconds) [info] - SPARK-26383 throw IllegalArgumentException if wrong kind of driver to the given url (6 milliseconds) [info] - support casting patterns for lower/upper bounds of TimestampType (201 milliseconds) [info] - Add exception when isolationLevel is Illegal (3 milliseconds) [info] - SPARK-28552: Case-insensitive database URLs in JdbcDialect (0 milliseconds) [info] - SQLContext.jdbc (deprecated) (251 milliseconds) [info] - SPARK-32364: JDBCOption constructor (0 milliseconds) 09:50:10.186 WARN org.apache.spark.sql.jdbc.JDBCSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.jdbc.JDBCSuite, thread names:
block-manager-ask-thread-pool-78, block-manager-ask-thread-pool-62, block-manager-ask-thread-pool-73, block-manager-ask-thread-pool-23, shuffle-boss-163-1, block-manager-ask-thread-pool-39, block-manager-ask-thread-pool-47, block-manager-ask-thread-pool-95, block-manager-ask-thread-pool-26, block-manager-ask-thread-pool-79, rpc-boss-160-1, block-manager-ask-thread-pool-17, block-manager-ask-thread-pool-81, block-manager-ask-thread-pool-64, block-manager-ask-thread-pool-3, block-manager-ask-thread-pool-99, block-manager-ask-thread-pool-63, block-manager-ask-thread-pool-77, block-manager-ask-thread-pool-74, block-manager-ask-thread-pool-28, block-manager-ask-thread-pool-41, block-manager-ask-thread-pool-87 ===== [info] BasicWriteJobStatsTrackerMetricSuite: [info] - SPARK-32978: make sure the number of dynamic part metric is correct (1 second, 663 milliseconds) 09:50:11.859 WARN org.apache.spark.sql.execution.datasources.BasicWriteJobStatsTrackerMetricSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.BasicWriteJobStatsTrackerMetricSuite, thread names: shuffle-boss-169-1, rpc-boss-166-1 ===== [info] HeaderCSVReadSchemaSuite: 09:50:12.560 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=two/part-00000-490ec34a-7037-43dd-a798-b589587a06f2-c000.csv 09:50:12.564 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=two/part-00001-490ec34a-7037-43dd-a798-b589587a06f2-c000.csv 09:50:12.566 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 1, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=one/part-00001-8b25ecc0-bdf4-469f-89c9-d9fda1c9f4ed-c000.csv 09:50:12.567 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 1, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=one/part-00000-8b25ecc0-bdf4-469f-89c9-d9fda1c9f4ed-c000.csv 09:50:12.603 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=two/part-00000-490ec34a-7037-43dd-a798-b589587a06f2-c000.csv 09:50:12.605 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 1, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=one/part-00001-8b25ecc0-bdf4-469f-89c9-d9fda1c9f4ed-c000.csv 09:50:12.606 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 3 CSV 
file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=two/part-00001-490ec34a-7037-43dd-a798-b589587a06f2-c000.csv 09:50:12.606 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 1, schema size: 3 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-45356a30-e96a-4774-bb1b-8e3d90f2d495/part=one/part-00000-8b25ecc0-bdf4-469f-89c9-d9fda1c9f4ed-c000.csv [info] - append column at the end (714 milliseconds) 09:50:13.037 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 2 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00001-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.039 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 2 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00000-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.073 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 2 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00001-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.075 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 2 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00000-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.135 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=two/part-00001-4e85e0e8-95ed-4f3d-a93d-936c7b796aa8-c000.csv 09:50:13.135 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00001-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.137 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00000-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv 09:50:13.137 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=two/part-00000-4e85e0e8-95ed-4f3d-a93d-936c7b796aa8-c000.csv 
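
The SPARK-23856 test above deliberately runs a 100-way self-join against an embedded H2 database so that the configured statement timeout fires; the [57014-195] code in the H2 trace is H2's "statement was canceled" error, and the Session.checkCanceled frames show the cancellation path. A minimal sketch of how the queryTimeout JDBC option is used on the read path (the URL, timeout value, and session setup are illustrative assumptions, not taken from the suite's source):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("jdbc-query-timeout").getOrCreate()

    // "queryTimeout" (in seconds) is forwarded to java.sql.Statement.setQueryTimeout,
    // so the underlying JDBC driver has to honor it for the statement to be cancelled.
    val people = spark.read
      .format("jdbc")
      .option("url", "jdbc:h2:mem:testdb")  // illustrative H2 URL
      .option("dbtable", "test.people")     // table name taken from the logged query
      .option("queryTimeout", "5")          // cancel statements running longer than 5s
      .load()

    people.show()  // a statement exceeding the timeout surfaces the driver's cancellation error
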
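The CSVHeaderChecker warnings around this point come from the read-schema evolution tests: partitioned CSV files were written with fewer (or more) columns than the schema later applied on read, and with the default enforceSchema=true Spark applies the schema anyway and only warns. A minimal sketch of the pattern, assuming an active SparkSession named `spark` (paths and column names are illustrative):

    import org.apache.spark.sql.types._

    // Three-field schema applied over partition files written with fewer columns.
    val schema = StructType(Seq(
      StructField("col1", IntegerType),
      StructField("col2", IntegerType),
      StructField("col3", IntegerType)))

    val df = spark.read
      .option("header", "true")
      .option("enforceSchema", "true")  // default: force the schema, warn on header mismatch
      .schema(schema)
      .csv("/tmp/csv-table")            // illustrative path

    // With enforceSchema=false, Spark instead validates header names against the
    // schema and fails the read on a mismatch rather than warning.
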
09:50:13.170 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=two/part-00001-4e85e0e8-95ed-4f3d-a93d-936c7b796aa8-c000.csv
09:50:13.171 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00001-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv
09:50:13.172 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 2, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=two/part-00000-4e85e0e8-95ed-4f3d-a93d-936c7b796aa8-c000.csv
09:50:13.173 WARN org.apache.spark.sql.catalyst.csv.CSVHeaderChecker: Number of column in CSV header is not equal to number of fields in the schema: Header length: 3, schema size: 1 CSV file: file:///home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-289276de-4a19-4f2d-a7f2-a674f6908940/part=three/part-00000-0a73c674-8ab6-4a4c-81c3-010f581fb153-c000.csv
[info] - hide column at the end (565 milliseconds)
[info] - change column type from byte to short/int/long (629 milliseconds)
[info] - change column type from short to int/long (412 milliseconds)
[info] - change column type from int to long (264 milliseconds)
[info] - read byte, int, short, long together (870 milliseconds)
[info] - change column type from float to double (377 milliseconds)
[info] - read float and double together (486 milliseconds)
[info] - change column type from float to decimal (319 milliseconds)
[info] - change column type from double to decimal (262 milliseconds)
[info] - read float, double, decimal together (627 milliseconds)
[info] - read as string (852 milliseconds)
09:50:18.328 WARN org.apache.spark.sql.execution.datasources.HeaderCSVReadSchemaSuite: ===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.HeaderCSVReadSchemaSuite, thread names: block-manager-storage-async-thread-pool-48, block-manager-ask-thread-pool-40, block-manager-storage-async-thread-pool-7, block-manager-storage-async-thread-pool-19, block-manager-storage-async-thread-pool-89, block-manager-ask-thread-pool-51, block-manager-storage-async-thread-pool-78, block-manager-storage-async-thread-pool-73, block-manager-ask-thread-pool-90, block-manager-storage-async-thread-pool-2, block-manager-storage-async-thread-pool-80, block-manager-ask-thread-pool-15, block-manager-storage-async-thread-pool-91, block-manager-storage-async-thread-pool-57, block-manager-ask-thread-pool-36, block-manager-storage-async-thread-pool-44, block-manager-storage-async-thread-pool-30, block-manager-storage-async-thread-pool-26, block-manager-storage-async-thread-pool-96, rpc-boss-172-1, block-manager-storage-async-thread-pool-79, block-manager-storage-async-thread-pool-14, block-manager-ask-thread-pool-8, shuffle-boss-175-1, block-manager-ask-thread-pool-50, block-manager-storage-async-thread-pool-83, block-manager-ask-thread-pool-1, block-manager-storage-async-thread-pool-42, block-manager-storage-async-thread-pool-62, block-manager-storage-async-thread-pool-31, block-manager-storage-async-thread-pool-67, block-manager-storage-async-thread-pool-56, block-manager-storage-async-thread-pool-24, block-manager-storage-async-thread-pool-47, block-manager-storage-async-thread-pool-0, block-manager-storage-async-thread-pool-59, block-manager-ask-thread-pool-31, block-manager-storage-async-thread-pool-13, block-manager-ask-thread-pool-27, block-manager-storage-async-thread-pool-9, block-manager-ask-thread-pool-92, block-manager-storage-async-thread-pool-32, block-manager-ask-thread-pool-7, block-manager-storage-async-thread-pool-71, block-manager-storage-async-thread-pool-63, block-manager-storage-async-thread-pool-23, block-manager-ask-thread-pool-13, block-manager-storage-async-thread-pool-55, block-manager-storage-async-thread-pool-17, block-manager-storage-async-thread-pool-18, block-manager-storage-async-thread-pool-76, block-manager-storage-async-thread-pool-66, block-manager-storage-async-thread-pool-98, block-manager-storage-async-thread-pool-5, block-manager-storage-async-thread-pool-8, block-manager-storage-async-thread-pool-60, block-manager-storage-async-thread-pool-88, block-manager-ask-thread-pool-97, block-manager-storage-async-thread-pool-54, block-manager-storage-async-thread-pool-38, block-manager-ask-thread-pool-66, block-manager-storage-async-thread-pool-3, block-manager-storage-async-thread-pool-21, block-manager-storage-async-thread-pool-85, block-manager-storage-async-thread-pool-90, block-manager-ask-thread-pool-41, block-manager-storage-async-thread-pool-45, block-manager-storage-async-thread-pool-34, block-manager-storage-async-thread-pool-92, block-manager-storage-async-thread-pool-81, block-manager-storage-async-thread-pool-43, block-manager-storage-async-thread-pool-11, block-manager-ask-thread-pool-24, block-manager-storage-async-thread-pool-16, block-manager-storage-async-thread-pool-22, block-manager-ask-thread-pool-44, block-manager-storage-async-thread-pool-75, block-manager-ask-thread-pool-6, block-manager-storage-async-thread-pool-53, block-manager-storage-async-thread-pool-86, block-manager-storage-async-thread-pool-58 =====
[info] DynamicPartitionPruningSuiteAEOff:
[info] - simple inner join triggers DPP with mock-up tables (695 milliseconds)
[info] - self-join on a partitioned table should not trigger DPP (248 milliseconds)
[info] - static scan metrics (1 second, 418 milliseconds)
[info] - DPP should not be rewritten as an existential join (193 milliseconds)
[info] - DPP triggers only for certain types of query (357 milliseconds)
[info]   + Given dynamic partition pruning disabled
[info]   + Given not a partition column
[info]   + Given no predicate on the dimension table
[info]   + Given left-semi join with partition column on the left side
[info]   + Given left-semi join with partition column on the right side
[info]   + Given left outer with partition column on the left side
[info]   + Given right outer join with partition column on the left side
[info] - filtering ratio policy fallback (1 second, 210 milliseconds)
[info]   + Given no stats and selective predicate
[info]   + Given no stats and selective predicate with the size of dim too large
[info]   + Given no stats and selective predicate with the size of dim small
[info] - filtering ratio policy with stats when the broadcast pruning is disabled (1 second, 70 milliseconds)
[info]   + Given disabling the use of stats in the DPP heuristic
[info]   + Given filtering ratio with stats disables pruning
[info]   + Given filtering ratio with stats enables pruning
[info]   + Given join condition more complex than fact.attr = dim.attr
[info] - partition pruning in broadcast hash joins with non-deterministic probe part (148 milliseconds)
[info]   + Given alias with simple join condition, and non-deterministic query
[info]   + Given alias over multiple sub-queries with simple join condition
[info] - postgreSQL/aggregates_part1.sql (18 seconds, 217 milliseconds)
09:50:27.319 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.319 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.346 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.346 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.459 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.460 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.475 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.475 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.613 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.613 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.669 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.669 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.796 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.797 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.822 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:27.822 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
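
The repeated WindowExec warning here and below is Spark's standard complaint about a window specification with no PARTITION BY clause, which forces every row into a single partition; the postgreSQL golden-file queries do this intentionally. A minimal sketch of both window shapes, assuming an active SparkSession named `spark` (the DataFrame and column names are illustrative):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    val df = spark.range(1000).select(
      (col("id") % 10).as("key"), col("id").as("ts"), lit(1L).as("amount"))

    // Unpartitioned window: all rows are shuffled into one partition, which is
    // exactly what "No Partition Defined for Window operation!" flags.
    val global = Window.orderBy("ts")
    val slow = df.withColumn("running_sum", sum("amount").over(global))

    // Partitioned window: the same computation distributed across partitions.
    val perKey = Window.partitionBy("key").orderBy("ts")
    val ok = df.withColumn("running_sum", sum("amount").over(perKey))
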
09:50:28.609 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:28.609 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - partition pruning in broadcast hash joins with aliases (1 second, 655 milliseconds)
[info]   + Given alias with simple join condition, using attribute names only
[info]   + Given alias with expr as join condition
[info]   + Given alias over multiple sub-queries with simple join condition
[info]   + Given alias over multiple sub-queries with simple join condition
09:50:28.631 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:28.631 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.018 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.018 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.035 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.035 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.105 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.105 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.121 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.121 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.242 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.243 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.283 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.283 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
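
For context on the DynamicPartitionPruningSuiteAEOff results interleaved here: dynamic partition pruning applies when a partitioned fact table is joined to a selectively filtered dimension table, so only the fact partitions that survive the dimension filter are scanned. A rough sketch of a query shape that can trigger it, assuming an active SparkSession named `spark` (tables, columns, and data are illustrative assumptions):

    spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")  // default in 3.x

    spark.range(10000)
      .selectExpr("id % 100 AS store_id", "rand() AS sale_amount")
      .write.partitionBy("store_id").mode("overwrite").saveAsTable("fact_sales")

    spark.range(100)
      .selectExpr("id AS store_id", "IF(id < 10, 'DE', 'US') AS country")
      .write.mode("overwrite").saveAsTable("dim_store")

    // The selective predicate on the dimension side can be turned into a runtime
    // filter on fact_sales.store_id, pruning partitions before the scan.
    spark.sql("""
      SELECT f.sale_amount
      FROM fact_sales f
      JOIN dim_store d ON f.store_id = d.store_id
      WHERE d.country = 'DE'
    """).explain()  // look for dynamicpruning subqueries in the plan
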
09:50:29.418 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.419 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.449 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:29.449 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.124 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.124 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.137 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.137 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.510 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.510 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.525 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.525 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.594 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.595 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.609 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.609 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.751 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.751 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.778 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.778 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.898 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.898 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.931 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:30.931 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - partition pruning in broadcast hash joins (2 seconds, 412 milliseconds)
[info]   + Given disable broadcast pruning and disable subquery duplication
[info]   + Given disable reuse broadcast results and enable subquery duplication
[info]   + Given enable reuse broadcast results and disable query duplication
[info]   + Given disable broadcast hash join and disable query duplication
[info]   + Given disable broadcast hash join and enable query duplication
09:50:31.564 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:31.564 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:31.576 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
09:50:31.576 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
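
The "Given ..." clauses above vary the DPP planning heuristics. Reading from the test names, the knobs involved appear to be the following (a sketch; several of these are internal Spark 3.x confs, so treat the exact keys as assumptions):

    // Only prune through an already-planned broadcast exchange, never by
    // duplicating the dimension subquery as a separate filtering query.
    spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.reuseBroadcastOnly", "false")
    // Use table/column statistics when estimating the pruning benefit.
    spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.useStats", "true")
    // Assumed selectivity of the dimension filter when no statistics exist.
    spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.fallbackFilterRatio", "0.5")
    // Disabling broadcast hash joins forces the subquery-duplication code path.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
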
[info] - postgreSQL/window_part3.sql (4 seconds, 618 milliseconds)
09:50:34.633 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 7609.0 (TID 9066)
java.lang.ArithmeticException: long overflow
    at java.lang.Math.multiplyExact(Math.java:892)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:34.635 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 7609.0 (TID 9065)
java.lang.ArithmeticException: long overflow
    at java.lang.Math.multiplyExact(Math.java:892)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:34.637 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 7609.0 (TID 9066) (192.168.10.31 executor driver): java.lang.ArithmeticException: long overflow
    at java.lang.Math.multiplyExact(Math.java:892)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:34.637 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 7609.0 failed 1 times; aborting job
[info] - broadcast a single key in a HashedRelation (4 seconds, 375 milliseconds)
09:50:35.831 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 7630.0 (TID 9101)
java.lang.ArithmeticException: Casting 4567890123456789 to int causes overflow
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:35.833 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 7630.0 (TID 9101) (192.168.10.31 executor driver): java.lang.ArithmeticException: Casting 4567890123456789 to int causes overflow
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:35.833 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 7630.0 failed 1 times; aborting job
09:50:35.837 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 7630.0 (TID 9102) (192.168.10.31 executor driver): TaskKilled (Stage cancelled)
09:50:35.963 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 7632.0 (TID 9105)
java.lang.ArithmeticException: Casting 4567890123456789 to short causes overflow
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:35.964 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 7632.0 (TID 9106)
java.lang.ArithmeticException: Casting 4567890123456789 to short causes overflow
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:35.965 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 7632.0 (TID 9105) (192.168.10.31 executor driver): java.lang.ArithmeticException: Casting 4567890123456789 to short causes overflow
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
09:50:35.965 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 7632.0 failed 1 times; aborting job
[info] - postgreSQL/int8.sql (4 seconds, 883 milliseconds)
[info] - broadcast multiple keys in a LongHashedRelation (3 seconds, 333 milliseconds)
[info] - broadcast multiple keys in an UnsafeHashedRelation (3 seconds, 528 milliseconds)
[info] - different broadcast subqueries with identical children (4 seconds, 240 milliseconds)
09:50:48.161 WARN org.apache.spark.sql.streaming.StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/temporary-f2935413-3399-4c03-986f-180d675b5b7b. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
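
The StreamingQueryManager warning just above appears because a streaming query was started without an explicit checkpoint location, so a temporary one was created under target/tmp. A minimal sketch of the two ways to control this, assuming an active SparkSession named `spark` (source, sink, and paths are illustrative):

    // An explicit checkpoint location avoids the temporary directory entirely
    // and lets the query resume from its state after a restart.
    val stream = spark.readStream.format("rate").load()
    val query = stream.writeStream
      .format("memory")
      .queryName("probe")
      .option("checkpointLocation", "/tmp/checkpoints/probe")  // illustrative path
      .start()

    // Alternatively, keep the temporary location but delete it even when the
    // query fails (best effort, as the warning itself notes):
    spark.conf.set("spark.sql.streaming.forceDeleteTempCheckpointLocation", "true")
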
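The ArithmeticException failures above ("long overflow" out of Math.multiplyExact, and the int/short cast overflows) are the expected outcomes of the postgreSQL/int8.sql golden file when ANSI mode is on: arithmetic and casts fail on overflow instead of silently wrapping. A small sketch, assuming an active SparkSession named `spark`, with values chosen to match the logged messages:

    spark.conf.set("spark.sql.ansi.enabled", "true")

    // Overflowing arithmetic fails through java.lang.Math.multiplyExact:
    spark.sql("SELECT 9223372036854775807L * 2L").show()
    // => java.lang.ArithmeticException: long overflow

    // Overflowing casts fail instead of truncating:
    spark.sql("SELECT CAST(4567890123456789 AS INT)").show()
    // => java.lang.ArithmeticException: Casting 4567890123456789 to int causes overflow

    // With ANSI mode off (the default in this era), the same cast silently
    // returns a wrapped value instead of failing.
    spark.conf.set("spark.sql.ansi.enabled", "false")
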
[info] - no partition pruning when the build side is a stream (2 seconds, 685 milliseconds)
[info] - avoid reordering broadcast join keys to match input hash partitioning (2 seconds, 394 milliseconds)
09:50:52.932 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
09:50:53.328 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
09:50:53.341 WARN org.apache.spark.sql.execution.datasources.DataSource: All paths were ignored:
[info] - dynamic partition pruning ambiguity issue across nested joins (2 seconds, 259 milliseconds)
[info] - cleanup any DPP filter that isn't pushed down due to expression id clashes (696 milliseconds)
[info] - cleanup any DPP filter that isn't pushed down due to non-determinism (77 milliseconds)
[info] - join key with multiple references on the filtering plan (2 seconds, 443 milliseconds)
[info] - Make sure dynamic pruning works on uncorrelated queries (1 second, 56 milliseconds)
[info] - SPARK-32509: Unused Dynamic Pruning filter shouldn't affect canonicalization and exchange reuse (409 milliseconds)
[info] - Plan broadcast pruning only when the broadcast can be reused (574 milliseconds)
[info]   + Given dynamic pruning filter on the build side
[info]   + Given dynamic pruning filter on the probe side
[info] - SPARK-32659: Fix the data issue when pruning DPP on non-atomic type (3 seconds, 309 milliseconds)
[info] - SPARK-32817: DPP throws error when the b