SuccessChanges

Summary

  1. [SPARK-30285][CORE] Fix deadlock between LiveListenerBus#stop and AsyncEventQueue#removeListenerOnError
  2. [SPARK-29930][SQL][FOLLOW-UP] Allow only default value to be set for removed SQL configs
Commit 10cae04108c375a7f5ca7685fea593bd7f49f7a6 by vanzin
[SPARK-30285][CORE] Fix deadlock between LiveListenerBus#stop and
AsyncEventQueue#removeListenerOnError
### What changes were proposed in this pull request?
There is a deadlock between `LiveListenerBus#stop` and
`AsyncEventQueue#removeListenerOnError`.
We can reproduce it as follows:
1. Post some events to `LiveListenerBus`.
2. Call `LiveListenerBus#stop`, which holds the synchronized lock of the bus (https://github.com/apache/spark/blob/5e92301723464d0876b5a7eec59c15fed0c5b98c/core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala#L229), waits until all the events are processed by listeners, and then removes all the queues.
3. Each event queue drains its events by posting them to its listeners. If a listener is interrupted, it calls `AsyncEventQueue#removeListenerOnError`, which in turn calls `bus.removeListener` (https://github.com/apache/spark/blob/7b1b60c7583faca70aeab2659f06d4e491efa5c0/core/src/main/scala/org/apache/spark/scheduler/AsyncEventQueue.scala#L207), trying to acquire the synchronized lock of the bus, resulting in deadlock.
This PR removes the `synchronized` from `LiveListenerBus.stop` because the underlying data structures are themselves thread-safe.
### Why are the changes needed? To fix the deadlock described above.
### Does this PR introduce any user-facing change? No.
### How was this patch tested? New UT.
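The deadlock pattern can be sketched in a few lines of plain JVM code. This is a minimal illustration with made-up class and method names, not Spark's actual `LiveListenerBus`/`AsyncEventQueue`: `stop()` waits for a drain thread that may call back into `removeListener` on the same bus, so `stop()` must not hold the bus monitor while waiting. With the `synchronized` dropped (as in this PR), the callback succeeds and shutdown completes.

```java
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical miniature of the bus. The fix relies on the listener
// collection being thread-safe on its own, so stop() needs no monitor.
class MiniBus {
    private final CopyOnWriteArrayList<String> listeners = new CopyOnWriteArrayList<>();

    void addListener(String l) { listeners.add(l); }

    // Fixed version: NOT synchronized. If this method held the bus monitor
    // while joining, the drain thread's removeListener() below would block
    // on that same monitor and neither thread could make progress.
    void stop(Thread drainThread) throws InterruptedException {
        drainThread.join();   // wait for all events to be processed
        listeners.clear();    // then drop the remaining listeners
    }

    // Called from the drain thread when a listener throws.
    void removeListener(String l) { listeners.remove(l); }

    int size() { return listeners.size(); }
}

public class DeadlockSketch {
    public static void main(String[] args) throws Exception {
        MiniBus bus = new MiniBus();
        bus.addListener("failing-listener");
        bus.addListener("healthy-listener");

        // Drain thread: simulates the event queue removing a failed listener
        // while stop() is waiting for it to finish.
        Thread drain = new Thread(() -> bus.removeListener("failing-listener"));
        drain.start();

        bus.stop(drain);
        System.out.println("stopped, listeners left: " + bus.size());
    }
}
```

Running this prints `stopped, listeners left: 0`; re-adding `synchronized` to `stop()` and `removeListener()` while `stop()` joins the drain thread reintroduces the hang.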
Closes #26924 from wangshuo128/event-queue-race-condition.
Authored-by: Wang Shuo <wangshuo128@gmail.com> Signed-off-by: Marcelo
Vanzin <vanzin@cloudera.com>
The file was modified: core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala
The file was modified: core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala
Commit a469976e6e1eb0b8f94c90b885f6c2f7795a2f01 by gurwls223
[SPARK-29930][SQL][FOLLOW-UP] Allow only default value to be set for
removed SQL configs
### What changes were proposed in this pull request? In the PR, I propose to throw `AnalysisException` when a removed SQL config is set to a non-default value. The following SQL configs, removed by #26559, are marked as removed:
1. `spark.sql.fromJsonForceNullableSchema`
2. `spark.sql.legacy.compareDateTimestampInTimestamp`
3. `spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation`
### Why are the changes needed? To improve the user experience with Spark SQL by notifying users when they set removed SQL configs.
### Does this PR introduce any user-facing change? Yes. Before, the `set` command was silently ignored:
```sql
spark-sql> set spark.sql.fromJsonForceNullableSchema=false;
spark.sql.fromJsonForceNullableSchema false
```
After, an exception is raised:
```sql
spark-sql> set spark.sql.fromJsonForceNullableSchema=false;
Error in query: The SQL config 'spark.sql.fromJsonForceNullableSchema' was removed in the version 3.0.0. It was removed to prevent errors like SPARK-23173 for non-default value.;
```
### How was this patch tested? Added new tests to `SQLConfSuite` for both cases: when removed SQL configs are set to their default values and to non-default values.
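The behavior can be illustrated with a small sketch. The class, registry shape, and exception type below are illustrative, not Spark's internals (Spark raises `AnalysisException`; a standard exception stands in here): each removed config keeps its last default value and removal version, setting it to that default is tolerated, and any other value fails fast.

```java
import java.util.Map;

// Hypothetical registry of removed configs: name -> {default value, removal version}.
public class RemovedConfigCheck {
    static final Map<String, String[]> REMOVED = Map.of(
        "spark.sql.fromJsonForceNullableSchema", new String[]{"true", "3.0.0"});

    static void set(String key, String value) {
        String[] info = REMOVED.get(key);
        if (info != null && !info[0].equals(value)) {
            // Non-default value for a removed config: fail instead of
            // silently ignoring the set, so the user learns it is gone.
            throw new IllegalArgumentException(
                "The SQL config '" + key + "' was removed in the version " + info[1] + ".");
        }
        // Default value (or a live config): accepted as before.
    }

    public static void main(String[] args) {
        set("spark.sql.fromJsonForceNullableSchema", "true");      // default: accepted
        try {
            set("spark.sql.fromJsonForceNullableSchema", "false"); // non-default: error
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Tolerating the default keeps old scripts that merely restate the default working, while any value that would have changed behavior now surfaces an error.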
Closes #27057 from MaxGekk/remove-sql-configs-followup.
Authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: HyukjinKwon
<gurwls223@apache.org>
The file was modified: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala
The file was modified: sql/core/src/main/scala/org/apache/spark/sql/RuntimeConfig.scala
The file was modified: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala