SuccessChanges

Summary

  1. [SPARK-25816][SQL] Fix attribute resolution in nested extractors (commit: 53aeb3d6587a04b0b7f7e454fa3e2a88aee1ba98)
  2. [SPARK-25797][SQL][DOCS][BACKPORT-2.3] Add migration doc for solving (commit: 3e0160bacfbe4597f15ca410ca832617cdeeddca)
  3. [DOC] Fix doc for spark.sql.parquet.recordLevelFilter.enabled (commit: 632c0d911c1bbdc715fe476ea49db9bfd387517f)
Commit 53aeb3d6587a04b0b7f7e454fa3e2a88aee1ba98 by gatorsmile
[SPARK-25816][SQL] Fix attribute resolution in nested extractors
Extractors are made of 2 expressions: one defines the value to be
extracted from (called `child`) and the other defines the way of
extraction (called `extraction`). In this sense extractors have 2
children, so they shouldn't be `UnaryExpression`s.
`ResolveReferences` was changed in this commit:
https://github.com/apache/spark/commit/36b826f5d17ae7be89135cb2c43ff797f9e7fe48
which resulted in a regression with nested extractors. An extractor needs
to define its children as the set of both `child` and `extraction`, and
should try to resolve both in `ResolveReferences`.
This PR changes `UnresolvedExtractValue` to a `BinaryExpression`.
Added a unit test.
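For illustration, a minimal sketch of the kind of nested extractor whose resolution this fix affects. This is not taken from the PR; the column names and data are assumptions.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// A nested extractor: the outer extraction's `child` is itself an
// extract-value expression, so both `child` and `extraction` have to be
// resolved by ResolveReferences.
val df = Seq((1, Map(1 -> Map(2 -> "a")))).toDF("key", "m")
df.select(df("m")(df("key"))(2)).show()   // m[key][2] -> "a"
```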
Closes #22817 from peter-toth/SPARK-25816.
Authored-by: Peter Toth <peter.toth@gmail.com> Signed-off-by: gatorsmile
<gatorsmile@gmail.com>
(cherry picked from commit ca2fca143277deaff58a69b7f1e0360cfc70561f)
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
(commit: 53aeb3d6587a04b0b7f7e454fa3e2a88aee1ba98)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/unresolved.scala (diff)
Commit 3e0160bacfbe4597f15ca410ca832617cdeeddca by dongjoon
[SPARK-25797][SQL][DOCS][BACKPORT-2.3] Add migration doc for solving
issues caused by view canonicalization approach change
## What changes were proposed in this pull request?
Since Spark 2.2, view definitions are stored in a different way from
prior versions. This may leave Spark unable to read views created by
prior versions. See
[SPARK-25797](https://issues.apache.org/jira/browse/SPARK-25797) for
more details.
Basically, we have 2 options. 1) Make Spark 2.2+ able to get older view
definitions back. Since the expanded text is buggy and unusable, we have
to use the original text (this is possible with
[SPARK-25459](https://issues.apache.org/jira/browse/SPARK-25459)).
However, because older Spark versions don't save the database context,
we cannot always reconstruct correct view definitions without the view's
default database. 2) Recreate the views with `ALTER VIEW AS` or `CREATE
OR REPLACE VIEW AS`.
This PR adds a migration doc to help users troubleshoot this issue via
option 2 above.
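A rough sketch of option 2 (not part of the PR; the table and view names are placeholders), recreating a view so that its definition is stored using the current canonicalization approach:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Stand-in for a table that an old (pre-2.2) view was defined over.
spark.sql("CREATE TABLE IF NOT EXISTS base_table (id INT, name STRING) USING parquet")

// Recreating the view rewrites the stored definition text.
spark.sql("CREATE OR REPLACE VIEW my_view AS SELECT id, name FROM base_table")
// or, for an existing view:
// spark.sql("ALTER VIEW my_view AS SELECT id, name FROM base_table")
```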
## How was this patch tested? N/A.
Docs are generated and checked locally:
```
cd docs
SKIP_API=1 jekyll serve --watch
```
Closes #22851 from seancxmao/SPARK-25797-2.3.
Authored-by: seancxmao <seancxmao@gmail.com> Signed-off-by: Dongjoon
Hyun <dongjoon@apache.org>
(commit: 3e0160bacfbe4597f15ca410ca832617cdeeddca)
The file was modified docs/sql-programming-guide.md (diff)
Commit 632c0d911c1bbdc715fe476ea49db9bfd387517f by wenchen
[DOC] Fix doc for spark.sql.parquet.recordLevelFilter.enabled
## What changes were proposed in this pull request?
Updated the doc string value for
spark.sql.parquet.recordLevelFilter.enabled to indicate that
spark.sql.parquet.enableVectorizedReader must be disabled.
The code in ParquetFileFormat uses
spark.sql.parquet.recordLevelFilter.enabled only after falling back to
parquet-mr (see the `else` branch of this `if` statement):
https://github.com/apache/spark/blob/d5573c578a1eea9ee04886d9df37c7178e67bb30/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L412
https://github.com/apache/spark/blob/d5573c578a1eea9ee04886d9df37c7178e67bb30/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L427-L430
Tests also bear this out.
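As a hedged sketch of the documented behavior (the configuration names come from the commit above; the path, data, and filter are assumptions), record-level filtering only applies once the vectorized reader is disabled:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Record-level filtering is applied by the parquet-mr reader, so the
// vectorized reader has to be turned off for this option to take effect.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
spark.conf.set("spark.sql.parquet.filterPushdown", "true")
spark.conf.set("spark.sql.parquet.recordLevelFilter.enabled", "true")

// Placeholder data and path, for illustration only.
spark.range(100).toDF("id").write.mode("overwrite").parquet("/tmp/rlf_demo")
spark.read.parquet("/tmp/rlf_demo").filter("id > 90").show()
```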
## How was this patch tested?
This is just a doc string fix: I built Spark and ran a single test.
Closes #22865 from bersprockets/confdocfix.
Authored-by: Bruce Robbins <bersprockets@gmail.com> Signed-off-by:
Wenchen Fan <wenchen@databricks.com>
(cherry picked from commit 4e990d9dd2407dc257712c4b12b507f0990ca4e9)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
(commit: 632c0d911c1bbdc715fe476ea49db9bfd387517f)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala (diff)