SuccessChanges

Summary

  1. [SPARK-26606][CORE] Handle driver options properly when submitting to (commit: 978b68a35d23c094fd005a1fb6e5ebc10e33f8d0) (details)
  2. [SPARK-27160][SQL] Fix DecimalType when building orc filters (commit: ac683b75abd220d8dd7073c87f848c4b2e64f683) (details)
Commit 978b68a35d23c094fd005a1fb6e5ebc10e33f8d0 by vanzin
[SPARK-26606][CORE] Handle driver options properly when submitting to
standalone cluster mode via legacy Client
This patch fixes an issue where ClientEndpoint in standalone cluster
mode doesn't recognize driver options that are passed via SparkConf
instead of system properties. When `Client` is executed via the CLI,
these options must be provided as system properties, but with
`spark-submit` they can be provided via SparkConf. (SparkSubmit calls
`ClientApp.start` with a SparkConf that contains these options.)
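The essence of the fix is to read each driver option from the submitted SparkConf and fall back to the JVM system property, rather than consulting system properties alone. A minimal sketch of that lookup in plain Java (the `DriverOptionLookup` class and its map are illustrative stand-ins, not Spark's actual SparkConf API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class DriverOptionLookup {
    // Hypothetical stand-in for SparkConf: options set by spark-submit.
    private final Map<String, String> conf = new HashMap<>();

    public void set(String key, String value) {
        conf.put(key, value);
    }

    // Prefer the value from the submitted conf; fall back to a JVM
    // system property, which is how the legacy CLI path supplies it.
    public Optional<String> get(String key) {
        String fromConf = conf.get(key);
        if (fromConf != null) {
            return Optional.of(fromConf);
        }
        return Optional.ofNullable(System.getProperty(key));
    }

    public static void main(String[] args) {
        DriverOptionLookup lookup = new DriverOptionLookup();

        // spark-submit path: option arrives via the conf, not system properties.
        lookup.set("spark.driver.extraJavaOptions", "-Dfoo=BAR");
        System.out.println(lookup.get("spark.driver.extraJavaOptions").orElse("missing"));

        // Legacy CLI path: option arrives only as a system property.
        System.setProperty("spark.driver.memory", "512m");
        System.out.println(lookup.get("spark.driver.memory").orElse("missing"));
    }
}
```

With the pre-fix behavior (system properties only), the first lookup would return nothing even though `spark-submit` had set the option.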
Manually tested via the following steps:
1) set up a standalone cluster (launch master and worker via
`./sbin/start-all.sh`)
2) submit one of the example apps in standalone cluster mode
```
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
"spark://localhost:7077" --conf
"spark.driver.extraJavaOptions=-Dfoo=BAR" --deploy-mode "cluster"
--num-executors 1 --driver-memory 512m --executor-memory 512m
--executor-cores 1 examples/jars/spark-examples*.jar 10
```
3) check whether `foo=BAR` appears among the system properties shown in
the Spark UI
Screenshot ("Screen Shot 2019-03-21 at 8 18 04 AM"):
https://user-images.githubusercontent.com/1317309/54728501-97db1700-4bc1-11e9-89da-078445c71e9b.png
Closes #24163 from HeartSaVioR/SPARK-26606.
Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan@gmail.com>
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
(cherry picked from commit 8a9eb05137cd4c665f39a54c30d46c0c4eb7d20b)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
(commit: 978b68a35d23c094fd005a1fb6e5ebc10e33f8d0)
The file was modified: core/src/main/scala/org/apache/spark/deploy/Client.scala (diff)
Commit ac683b75abd220d8dd7073c87f848c4b2e64f683 by dhyun
[SPARK-27160][SQL] Fix DecimalType when building orc filters
A DecimalType literal should not be cast to Long.
E.g. for `df.filter("x < 3.14")`, assuming df (with x in DecimalType)
reads from an ORC table and uses the native ORC reader with predicate
pushdown enabled, we push down the `x < 3.14` predicate to the ORC
reader via a SearchArgument.
OrcFilters constructs the SearchArgument, but did not handle
DecimalType correctly: the previous implementation would construct
`x < 3` from `x < 3.14`.
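The truncation is easy to reproduce in plain Java: converting the decimal literal to a long before building the filter silently drops the fractional part, which changes the predicate's meaning (this is a generic illustration of the bug, not Spark's OrcFilters code):

```java
import java.math.BigDecimal;

public class DecimalTruncation {
    public static void main(String[] args) {
        BigDecimal literal = new BigDecimal("3.14");

        // Buggy approach: casting the decimal literal to a long
        // truncates it, so the pushed-down predicate becomes x < 3.
        long truncated = literal.longValue();
        System.out.println("x < " + truncated);

        // Correct approach: keep the decimal value intact so the
        // reader evaluates x < 3.14 as written.
        System.out.println("x < " + literal);

        // A row with x = 3.1 shows the difference: it satisfies
        // x < 3.14 but would be wrongly filtered out by x < 3.
        BigDecimal x = new BigDecimal("3.1");
        System.out.println(x.compareTo(literal) < 0);                       // true
        System.out.println(x.compareTo(BigDecimal.valueOf(truncated)) < 0); // false
    }
}
```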
```
$ sbt
> sql/testOnly *OrcFilterSuite
> sql/testOnly *OrcQuerySuite -- -z "27160"
```
Closes #24092 from sadhen/spark27160.
Authored-by: Darcy Shen <sadhen@zoho.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
(cherry picked from commit f3ba73a5f54cc233424cee4fdfd3a61674b2b48e)
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
(commit: ac683b75abd220d8dd7073c87f848c4b2e64f683)
The file was modified: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilterSuite.scala (diff)
The file was modified: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilters.scala (diff)
The file was modified: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/orc/OrcQuerySuite.scala (diff)