1. [SPARK-25454][SQL] add a new config for picking minimum precision for (commit: 26d893a4f64de18222942568f7735114447a6ab7) (details)
  2. [SPARK-25536][CORE] metric value for METRIC_OUTPUT_RECORDS_WRITTEN is (commit: f40e4c71cdb46392648c35a2f2cb0de140f3c5a8) (details)
  3. [SPARK-25533][CORE][WEBUI] AppSummary should hold the information about (commit: f13565b6ec2de2e3304b42de3a2e61da6a8ff3b0) (details)
Commit 26d893a4f64de18222942568f7735114447a6ab7 by gatorsmile
[SPARK-25454][SQL] add a new config for picking minimum precision for
integral literals
## What changes were proposed in this pull request?
An earlier PR proposed to allow precision loss during decimal
operations, to reduce the possibility of overflow. That is a behavior
change and is protected by the DECIMAL_OPERATIONS_ALLOW_PREC_LOSS
config. However, the same PR introduced another behavior change that is
not protected by a config: picking a minimum precision for integral
literals. This PR adds a new config for it, which allows users to work
around the issue in SPARK-25454, caused by a long-standing bug of
negative scale.
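The rule in question can be illustrated with a minimal sketch. The object and method below are hypothetical, not Spark's actual DecimalPrecision code; they only show what "minimum precision for an integral literal" means: the precision is the number of decimal digits in the value, and the scale is 0, so e.g. 123 maps to DECIMAL(3, 0).

```scala
// Hypothetical sketch (not Spark's real implementation) of choosing the
// minimum decimal precision for an integral literal.
object MinPrecisionSketch {
  // Count the decimal digits of the value; the scale is always 0.
  // BigInt avoids the Long.MinValue overflow that math.abs would hit.
  def minPrecisionFor(v: Long): Int =
    if (v == 0) 1 else BigInt(v).abs.toString.length

  def main(args: Array[String]): Unit = {
    println(minPrecisionFor(123L))   // 3, i.e. DECIMAL(3, 0)
    println(minPrecisionFor(-4500L)) // 4, i.e. DECIMAL(4, 0)
  }
}
```

With the minimum precision picked, a literal participates in decimal arithmetic with a tighter type than a blanket default, which changes result precision and is why the behavior sits behind a config.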
## How was this patch tested?
A new test.
Closes #22494 from cloud-fan/decimal.
Authored-by: Wenchen Fan <> Signed-off-by:
gatorsmile <>
(cherry picked from commit d0990e3dfee752a6460a6360e1a773138364d774)
Signed-off-by: gatorsmile <>
(commit: 26d893a4f64de18222942568f7735114447a6ab7)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala (diff)
Commit f40e4c71cdb46392648c35a2f2cb0de140f3c5a8 by dongjoon
## What changes were proposed in this pull request?
Changed the metric value from
'task.metrics.inputMetrics.recordsRead' to
'task.metrics.outputMetrics.recordsWritten'. This bug was introduced in
## How was this patch tested?
Existing tests.
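The one-line fix can be illustrated with a minimal sketch. The case classes below are hypothetical stand-ins for Spark's task-metrics structures, not the real API; the point is only that the records-written value must be read from the output metrics, not the input metrics.

```scala
// Hypothetical stand-ins (not Spark's actual classes) for the metric
// structures involved in the fix.
final case class InputMetrics(recordsRead: Long)
final case class OutputMetrics(recordsWritten: Long)
final case class TaskMetrics(inputMetrics: InputMetrics,
                             outputMetrics: OutputMetrics)

object MetricFixSketch {
  // Before the fix the value was mistakenly taken from
  // inputMetrics.recordsRead; the correct source is outputMetrics.
  def recordsWritten(m: TaskMetrics): Long =
    m.outputMetrics.recordsWritten

  def main(args: Array[String]): Unit = {
    val m = TaskMetrics(InputMetrics(10L), OutputMetrics(7L))
    println(recordsWritten(m)) // 7
  }
}
```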
Closes #22555 from shahidki31/SPARK-25536.
Authored-by: Shahid <> Signed-off-by: Dongjoon Hyun
(cherry picked from commit 5def10e61e49dba85f4d8b39c92bda15137990a2)
Signed-off-by: Dongjoon Hyun <>
(commit: f40e4c71cdb46392648c35a2f2cb0de140f3c5a8)
The file was modified core/src/main/scala/org/apache/spark/executor/Executor.scala (diff)
Commit f13565b6ec2de2e3304b42de3a2e61da6a8ff3b0 by vanzin
[SPARK-25533][CORE][WEBUI] AppSummary should hold the information about
succeeded Jobs and completed stages only
Currently, in the Spark UI, when there are failed jobs or failed
stages, the display message for completed jobs and completed stages is
not consistent with previous versions of Spark. The reason is that
AppSummary holds information about all jobs and stages, while the code
checks against completedJobs and completedStages. So we should keep
only successful jobs and completed stages in the AppSummary, to make
the UI consistent with Spark 2.2.
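A minimal sketch of the idea, using hypothetical types rather than Spark's actual AppStatusListener and AppSummary: the summary counter advances only when a job ends successfully, so failed jobs no longer inflate the UI totals.

```scala
// Hypothetical sketch (not Spark's real listener code) of counting
// only successfully completed jobs in the application summary.
final case class AppSummary(numCompletedJobs: Int,
                            numCompletedStages: Int)

object AppSummarySketch {
  // Bump the completed-jobs counter only on success; a failed job
  // leaves the summary untouched.
  def onJobEnd(summary: AppSummary, succeeded: Boolean): AppSummary =
    if (succeeded)
      summary.copy(numCompletedJobs = summary.numCompletedJobs + 1)
    else
      summary

  def main(args: Array[String]): Unit = {
    val s0 = AppSummary(0, 0)
    val s1 = onJobEnd(onJobEnd(s0, succeeded = true), succeeded = false)
    println(s1.numCompletedJobs) // 1: one success, one failure
  }
}
```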
Test steps:
```
sc.parallelize(1 to 5, 5).collect()
sc.parallelize(1 to 5, 2).map { x => throw new RuntimeException("Fail") }.collect()
```
**Before fix:** (screenshots from 2018-09-26; image links lost)
**After fix:** (screenshots from 2018-09-26; image links lost)
Closes #22549 from shahidki31/SPARK-25533.
Authored-by: Shahid <> Signed-off-by: Marcelo Vanzin
(cherry picked from commit 5ee21661834e837d414bc20591982a092c0aece3)
Signed-off-by: Marcelo Vanzin <>
(commit: f13565b6ec2de2e3304b42de3a2e61da6a8ff3b0)
The file was modified core/src/main/scala/org/apache/spark/status/AppStatusListener.scala (diff)