Failed Changes

Summary

  1. [SPARK-25646][K8S] Fix docker-image-tool.sh on dev build. (details)
  2. [SPARK-25655][BUILD] Add -Pspark-ganglia-lgpl to the scala style check. (details)
  3. [SPARK-25202][SQL] Implements split with limit sql function (details)
  4. [SPARK-25600][SQL][MINOR] Make use of TypeCoercion.findTightestCommonType while inferring CSV schema. (details)
  5. [SPARK-25621][SPARK-25622][TEST] Reduce test time of BucketedReadWithHiveSupportSuite (details)
Commit 58287a39864db463eeef17d1152d664be021d9ef by dongjoon
[SPARK-25646][K8S] Fix docker-image-tool.sh on dev build.
The docker file was referencing a path that only existed in the
distribution tarball; it needs to be parameterized so that the right
path can be used in a dev build.
Tested on local dev build.
Closes #22634 from vanzin/SPARK-25646.
Authored-by: Marcelo Vanzin <vanzin@cloudera.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
The file was modified resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile (diff)
The file was modified bin/docker-image-tool.sh (diff)
Commit 44cf800c831588b1f7940dd8eef7ecb6cde28f23 by hyukjinkwon
[SPARK-25655][BUILD] Add -Pspark-ganglia-lgpl to the scala style check.
## What changes were proposed in this pull request? Our lint failed due
to the following errors:
```
[INFO] --- scalastyle-maven-plugin:1.0.0:check (default)
spark-ganglia-lgpl_2.11 --- error
file=/home/jenkins/workspace/spark-master-maven-snapshots/spark/external/spark-ganglia-lgpl/src/main/scala/org/apache/spark/metrics/sink/GangliaSink.scala
message=
     Are you sure that you want to use toUpperCase or toLowerCase
without the root locale? In most cases, you
     should use toUpperCase(Locale.ROOT) or toLowerCase(Locale.ROOT)
instead.
     If you must use toUpperCase or toLowerCase without the root locale,
wrap the code block with
     // scalastyle:off caselocale
     .toUpperCase
     .toLowerCase
     // scalastyle:on caselocale
    line=67 column=49 error
file=/home/jenkins/workspace/spark-master-maven-snapshots/spark/external/spark-ganglia-lgpl/src/main/scala/org/apache/spark/metrics/sink/GangliaSink.scala
message=
     Are you sure that you want to use toUpperCase or toLowerCase
without the root locale? In most cases, you
     should use toUpperCase(Locale.ROOT) or toLowerCase(Locale.ROOT)
instead.
     If you must use toUpperCase or toLowerCase without the root locale,
wrap the code block with
     // scalastyle:off caselocale
     .toUpperCase
     .toLowerCase
     // scalastyle:on caselocale
    line=71 column=32 Saving to
outputFile=/home/jenkins/workspace/spark-master-maven-snapshots/spark/external/spark-ganglia-lgpl/target/scalastyle-output.xml
```
See
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/8890/
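For reference, here is a minimal illustration of the `caselocale` rule the checker enforces; the variable and value below are hypothetical and not taken from `GangliaSink`:
```
import java.util.Locale

// Locale-sensitive case conversion depends on the JVM's default locale (the classic
// pitfall is the Turkish dotless 'i'), so scalastyle asks for an explicit root locale.
val propertyName = "Host"                            // hypothetical value, for illustration only
val flagged = propertyName.toLowerCase               // this form is flagged by the check
val fixed   = propertyName.toLowerCase(Locale.ROOT)  // this form passes the check
```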
## How was this patch tested? N/A
Closes #22647 from gatorsmile/fixLint.
Authored-by: gatorsmile <gatorsmile@gmail.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
The file was modified dev/scalastyle (diff)
The file was modified external/spark-ganglia-lgpl/src/main/scala/org/apache/spark/metrics/sink/GangliaSink.scala (diff)
Commit 17781d75308c328b11cab3658ca4f358539414f2 by hyukjinkwon
[SPARK-25202][SQL] Implements split with limit sql function
## What changes were proposed in this pull request?
Adds support for setting the limit in the SQL split function.
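As a rough sketch of the new behaviour (assuming the SQL signature `split(str, regex, limit)`, with the limit semantics mirroring `java.lang.String.split`), a quick check from Scala might look like this:
```
import org.apache.spark.sql.SparkSession

// Minimal sketch, not taken from the PR's tests.
val spark = SparkSession.builder().master("local[*]").appName("split-limit-sketch").getOrCreate()
// With limit = 2, the result has at most two elements and the last one keeps the remainder.
spark.sql("SELECT split('one,two,three', ',', 2)").show(false)
// expected array: [one, two,three]
spark.stop()
```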
## How was this patch tested?
1. Updated unit tests
2. Tested using Scala spark shell
Closes #22227 from phegstrom/master.
Authored-by: Parker Hegstrom <phegstrom@palantir.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
The file was modified common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala (diff)
The file was modified sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/RegexpExpressionsSuite.scala (diff)
The file was modified sql/core/src/test/resources/sql-tests/inputs/string-functions.sql (diff)
The file was modified R/pkg/R/functions.R (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala (diff)
The file was modified R/pkg/R/generics.R (diff)
The file was modified sql/core/src/main/scala/org/apache/spark/sql/functions.scala (diff)
The file was modified python/pyspark/sql/functions.py (diff)
The file was modified R/pkg/tests/fulltests/test_sparkSQL.R (diff)
The file was modified common/unsafe/src/test/java/org/apache/spark/unsafe/types/UTF8StringSuite.java (diff)
The file was modified sql/core/src/test/resources/sql-tests/results/string-functions.sql.out (diff)
Commit f2f4e7afe730badaf443f459b27fe40879947d51 by hyukjinkwon
[SPARK-25600][SQL][MINOR] Make use of
TypeCoercion.findTightestCommonType while inferring CSV schema.
## What changes were proposed in this pull request? Currently, the CSV schema
inference code inlines `TypeCoercion.findTightestCommonType`. This is
a minor refactor to make use of the common type coercion code when
applicable.  This way we can take advantage of any improvement to the
base method.
Thanks to MaxGekk for finding this while reviewing another PR.
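As an aside, a toy version of the "tightest common type" idea may help make the refactor concrete; this is only an illustration of the concept, not Spark's actual `TypeCoercion.findTightestCommonType` implementation, and `tightestCommon` is a hypothetical helper:
```
import org.apache.spark.sql.types._

// Illustration only: CSV schema inference merges the types guessed for individual rows;
// when two guesses differ, it asks for the tightest type that can hold both.
def tightestCommon(a: DataType, b: DataType): Option[DataType] = (a, b) match {
  case (x, y) if x == y                                       => Some(x)
  case (IntegerType, LongType) | (LongType, IntegerType)      => Some(LongType)
  case (IntegerType, DoubleType) | (DoubleType, IntegerType)  => Some(DoubleType)
  case (LongType, DoubleType) | (DoubleType, LongType)        => Some(DoubleType)
  case _                                                      => None // no tightest type; caller falls back, e.g. to StringType
}

tightestCommon(IntegerType, LongType)   // Some(LongType)
tightestCommon(DoubleType, StringType)  // None
```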
## How was this patch tested? This is a minor refactor.  Existing tests
are used to verify the change.
Closes #22619 from dilipbiswal/csv_minor.
Authored-by: Dilip Biswal <dbiswal@us.ibm.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
The file was modified sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala (diff)
Commit 1ee472eec15e104c4cd087179a9491dc542e15d7 by hyukjinkwon
[SPARK-25621][SPARK-25622][TEST] Reduce test time of
BucketedReadWithHiveSupportSuite
## What changes were proposed in this pull request?
By replacing loops over the possible values with a single randomly chosen value (sketched below):
- `read partitioning bucketed tables with bucket pruning filters`: reduced from 55s to 7s
- `read partitioning bucketed tables having composite filters`: reduced from 54s to 8s
- total time: reduced from 288s to 192s
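A sketch of the pattern (not the suite's actual code; `runBucketPruningTest` and the value range are hypothetical placeholders):
```
import scala.util.Random

val candidateValues = 0 until 50                                     // hypothetical domain
// Before: candidateValues.foreach(v => runBucketPruningTest(v))     // exercises every value, slow
val picked = candidateValues(Random.nextInt(candidateValues.length))
// runBucketPruningTest(picked)                                      // one randomly chosen value per run
```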
## How was this patch tested?
Unit test
Closes #22640 from gengliangwang/fastenBucketedReadSuite.
Authored-by: Gengliang Wang <gengliang.wang@databricks.com>
Signed-off-by: hyukjinkwon <gurwls223@apache.org>
The file was modified sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala (diff)