Console Output (Failed)

Skipping 2,534 KB... Full Log
rFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:23 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:24 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:24 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:28 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:29 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:29 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:33 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:35 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:35 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:37 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000000000(ns)
  20/09/17 16:58:38 INFO SparkContext: Starting job: collect at SparkRemoteFileTest.scala:43
  20/09/17 16:58:38 INFO DAGScheduler: Got job 0 (collect at SparkRemoteFileTest.scala:43) with 2 output partitions
  20/09/17 16:58:38 INFO DAGScheduler: Final stage: ResultStage 0 (collect at SparkRemoteFileTest.scala:43)
  20/09/17 16:58:38 INFO DAGScheduler: Parents of final stage: List()
  20/09/17 16:58:38 INFO DAGScheduler: Missing parents: List()
  20/09/17 16:58:38 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkRemoteFileTest.scala:38), which has no missing parents
  20/09/17 16:58:38 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.2 KiB, free 593.9 MiB)
  20/09/17 16:58:38 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1877.0 B, free 593.9 MiB)
  20/09/17 16:58:38 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on spark-test-app-a93649749d00309b-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc:7079 (size: 1877.0 B, free: 593.9 MiB)
  20/09/17 16:58:38 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1348
  20/09/17 16:58:38 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkRemoteFileTest.scala:38) (first 15 tasks are for partitions Vector(0, 1))
  20/09/17 16:58:38 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks resource profile 0
  20/09/17 16:58:39 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:41 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:41 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:45 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:46 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:46 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:50 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:52 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:52 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:58:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 16:58:55 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:58:57 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:58:57 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:00 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:02 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:02 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:05 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:08 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:08 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 16:59:11 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:13 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:13 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:16 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:18 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:18 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:22 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 16:59:24 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:24 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:28 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:30 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:30 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:34 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:35 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:35 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 16:59:39 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:41 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:41 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:45 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:47 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:47 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:51 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:53 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:53 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 16:59:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 16:59:57 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 16:59:58 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 16:59:58 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:02 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:04 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:04 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:08 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:00:09 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:09 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:13 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:14 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:14 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:18 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:19 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:19 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:23 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:00:25 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:25 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:29 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:30 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:30 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:34 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:35 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:35 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:00:39 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:40 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:40 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:44 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:45 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:45 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:49 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:51 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:51 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:00:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:00:55 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:00:56 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:00:56 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:00 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:01 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:01 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:05 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:07 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:07 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:01:11 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:13 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:13 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:17 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:18 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:18 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:22 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:01:23 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:23 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:27 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:28 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:28 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:32 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:33 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:33 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:37 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:01:38 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:38 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:42 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:44 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:44 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:47 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:49 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:49 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:52 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:53 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:01:54 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:54 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:01:57 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:01:59 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:01:59 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  20/09/17 17:02:02 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:02:05 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:02:05 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
  " did not contain "Mounting of tmp3647227692728330174.txt was true" The application did not complete, did not find str Mounting of tmp3647227692728330174.txt was true. (KubernetesSuite.scala:387)
- Test basic decommissioning *** FAILED ***
  The code passed to eventually never returned normally. Attempted 183 times over 3.016905734916667 minutes. Last failure message: "++ id -u
  + myuid=185
  ++ id -g
  + mygid=0
  + set +e
  ++ getent passwd 185
  + uidentry=
  + set -e
  + '[' -z '' ']'
  + '[' -w /etc/passwd ']'
  + echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
  + SPARK_CLASSPATH=':/opt/spark/jars/*'
  + env
  + grep SPARK_JAVA_OPT_
  + sort -t_ -k4 -n
  + sed 's/[^=]*=\(.*\)/\1/g'
  + readarray -t SPARK_EXECUTOR_JAVA_OPTS
  + '[' -n '' ']'
  + '[' 3 == 2 ']'
  + '[' 3 == 3 ']'
  ++ python3 -V
  + pyv3='Python 3.7.3'
  + export PYTHON_VERSION=3.7.3
  + PYTHON_VERSION=3.7.3
  + export PYSPARK_PYTHON=python3
  + PYSPARK_PYTHON=python3
  + export PYSPARK_DRIVER_PYTHON=python3
  + PYSPARK_DRIVER_PYTHON=python3
  + '[' -n '' ']'
  + '[' -z ']'
  + case "$1" in
  + shift 1
  + CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@")
  + exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=172.17.0.4 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner local:///opt/spark/tests/decommissioning.py
  20/09/17 17:02:38 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  Starting decom test
  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  20/09/17 17:02:39 INFO SparkContext: Running Spark version 3.1.0-SNAPSHOT
  20/09/17 17:02:39 INFO ResourceUtils: ==============================================================
  20/09/17 17:02:39 INFO ResourceUtils: No custom resources configured for spark.driver.
  20/09/17 17:02:39 INFO ResourceUtils: ==============================================================
  20/09/17 17:02:39 INFO SparkContext: Submitted application: PyMemoryTest
  20/09/17 17:02:39 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
  20/09/17 17:02:39 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
  20/09/17 17:02:39 INFO ResourceProfileManager: Added ResourceProfile id: 0
  20/09/17 17:02:39 INFO SecurityManager: Changing view acls to: 185,jenkins
  20/09/17 17:02:39 INFO SecurityManager: Changing modify acls to: 185,jenkins
  20/09/17 17:02:39 INFO SecurityManager: Changing view acls groups to: 
  20/09/17 17:02:39 INFO SecurityManager: Changing modify acls groups to: 
  20/09/17 17:02:39 INFO SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users  with view permissions: Set(185, jenkins); groups with view permissions: Set(); users  with modify permissions: Set(185, jenkins); groups with modify permissions: Set()
  20/09/17 17:02:40 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
  20/09/17 17:02:40 INFO SparkEnv: Registering MapOutputTracker
  20/09/17 17:02:40 INFO SparkEnv: Registering BlockManagerMaster
  20/09/17 17:02:40 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
  20/09/17 17:02:40 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
  20/09/17 17:02:40 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
  20/09/17 17:02:40 INFO DiskBlockManager: Created local directory at /var/data/spark-f6025b13-f871-4355-8a25-fb4236306213/blockmgr-8e695dda-7792-4785-afcb-bb8b10007bb8
  20/09/17 17:02:40 INFO MemoryStore: MemoryStore started with capacity 593.9 MiB
  20/09/17 17:02:40 INFO SparkEnv: Registering OutputCommitCoordinator
  20/09/17 17:02:40 INFO Utils: Successfully started service 'SparkUI' on port 4040.
  20/09/17 17:02:40 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc:4040
  20/09/17 17:02:40 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
  20/09/17 17:02:42 INFO ExecutorPodsAllocator: Going to request 3 executors from Kubernetes.
  20/09/17 17:02:42 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
  20/09/17 17:02:42 INFO NettyBlockTransferService: Server created on spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc:7079
  20/09/17 17:02:42 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
  20/09/17 17:02:42 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc, 7079, None)
  20/09/17 17:02:42 INFO BlockManagerMasterEndpoint: Registering block manager spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc:7079 with 593.9 MiB RAM, BlockManagerId(driver, spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc, 7079, None)
  20/09/17 17:02:42 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc, 7079, None)
  20/09/17 17:02:42 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc, 7079, None)
  20/09/17 17:02:42 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:43 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:43 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:47 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:02:47 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:02:51 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:02:51 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:51 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:52 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:02:55 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:02:57 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:02:57 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:57 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:02:58 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:01 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:02 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:02 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:02 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:03 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:05 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:07 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:03:07 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:08 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:10 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:10 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:10 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:11 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:12 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000000000(ns)
  20/09/17 17:03:12 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/opt/spark/work-dir/spark-warehouse').
  20/09/17 17:03:12 INFO SharedState: Warehouse path is 'file:/opt/spark/work-dir/spark-warehouse'.
  20/09/17 17:03:14 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:14 INFO SparkContext: Starting job: collect at /opt/spark/tests/decommissioning.py:44
  20/09/17 17:03:14 INFO DAGScheduler: Registering RDD 2 (groupByKey at /opt/spark/tests/decommissioning.py:43) as input to shuffle 0
  20/09/17 17:03:14 INFO DAGScheduler: Got job 0 (collect at /opt/spark/tests/decommissioning.py:44) with 5 output partitions
  20/09/17 17:03:14 INFO DAGScheduler: Final stage: ResultStage 1 (collect at /opt/spark/tests/decommissioning.py:44)
  20/09/17 17:03:14 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
  20/09/17 17:03:14 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
  20/09/17 17:03:14 INFO DAGScheduler: Submitting ShuffleMapStage 0 (PairwiseRDD[2] at groupByKey at /opt/spark/tests/decommissioning.py:43), which has no missing parents
  20/09/17 17:03:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 10.6 KiB, free 593.9 MiB)
  20/09/17 17:03:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.5 KiB, free 593.9 MiB)
  20/09/17 17:03:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on spark-test-app-62a1fa749d048c26-driver-svc.b9a001a0655d4d06b1055948a9dfba96.svc:7079 (size: 6.5 KiB, free: 593.9 MiB)
  20/09/17 17:03:14 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1348
  20/09/17 17:03:14 INFO DAGScheduler: Submitting 5 missing tasks from ShuffleMapStage 0 (PairwiseRDD[2] at groupByKey at /opt/spark/tests/decommissioning.py:43) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
  20/09/17 17:03:14 INFO TaskSchedulerImpl: Adding task set 0.0 with 5 tasks resource profile 0
  20/09/17 17:03:15 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:15 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:15 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:16 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:19 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:20 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:20 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:20 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:21 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:24 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:25 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:25 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:25 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:26 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:29 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:29 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:03:30 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:30 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:30 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:31 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:34 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:35 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:35 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:35 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:36 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:39 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:39 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:03:39 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:40 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:42 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:42 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:42 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:44 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:44 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:03:46 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:48 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:48 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:48 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:49 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:52 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:54 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:03:54 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:54 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:55 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:57 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:59 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:03:59 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:03:59 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:03:59 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:04:02 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:02 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:02 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:02 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:06 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:07 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:07 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:08 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:10 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:10 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:10 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:11 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:13 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:14 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:04:15 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:15 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:15 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:16 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:19 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:20 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:20 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:20 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:23 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:23 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:23 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:24 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0)
  	at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint$$anonfun$receiveAndReply$1.applyOrElse(KubernetesClusterSchedulerBackend.scala:203)
  	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
  	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
  	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
  	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
  	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
  	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  	at java.lang.Thread.run(Thread.java:748)
  20/09/17 17:04:27 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:27 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:27 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:28 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:29 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:04:30 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:30 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:30 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:31 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:34 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:35 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:35 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:36 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:38 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:38 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:38 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:39 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:42 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:43 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:43 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:44 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:44 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:04:46 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:46 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:46 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:47 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:50 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:51 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:51 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:51 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:52 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:55 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:56 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:04:56 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:57 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:04:59 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:04:59 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:59 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:04:59 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:05:00 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:03 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:04 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:04 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:04 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:06 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:08 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:09 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:05:09 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:10 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:12 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:12 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:12 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:13 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:14 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:05:16 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:17 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:05:17 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:19 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:20 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:20 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:21 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:22 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:25 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:26 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:05:26 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:26 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:29 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:29 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:29 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:29 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
  20/09/17 17:05:30 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:32 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:34 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:34 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:34 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:35 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:38 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:39 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes.
  20/09/17 17:05:39 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:40 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  20/09/17 17:05:42 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes.
  20/09/17 17:05:42 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:42 INFO BasicExecutorFeatureStep: Adding decommission script to lifecycle
  20/09/17 17:05:43 ERROR Inbox: Ignoring error
  org.apache.spark.SparkException: Unsupported message RetrieveSparkAppConfig(0) [stack trace as above, omitted]
  " did not contain "Finished waiting, stopping Spark" The application did not complete, did not find str Finished waiting, stopping Spark. (KubernetesSuite.scala:387)
Run completed in 1 hour, 3 minutes, 8 seconds.
Total number of tests run: 18
Suites: completed 2, aborted 0
Tests: succeeded 1, failed 17, canceled 0, ignored 0, pending 0
*** 17 TESTS FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.1.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.718 s]
[INFO] Spark Project Tags ................................. SUCCESS [  8.247 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  4.105 s]
[INFO] Spark Project Networking ........................... SUCCESS [  5.427 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  3.230 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 10.570 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  3.552 s]
[INFO] Spark Project Core ................................. SUCCESS [02:23 min]
[INFO] Spark Project Kubernetes Integration Tests ......... FAILURE [  01:06 h]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:09 h
[INFO] Finished at: 2020-09-17T10:06:10-07:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:2.0.0:test (integration-test) on project spark-kubernetes-integration-tests_2.12: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :spark-kubernetes-integration-tests_2.12
+ retcode3=1
+ kill -9 50258
+ minikube stop
:   Stopping "minikube" in kvm2 ...
-   "minikube" stopped.
/tmp/hudson1234576507076593648.sh: line 66: 50258 Killed                  minikube mount ${PVC_TESTS_HOST_PATH}:${PVC_TESTS_VM_PATH} --9p-version=9p2000.L --gid=0 --uid=185
+ [[ 1 = 0 ]]
+ test_status=failure
+ /home/jenkins/bin/post_github_pr_comment.py
Attempting to post to Github...
 > Post successful.
+ rm -rf /tmp/tmp.AMVpB2wQl4
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/33453/
Finished: FAILURE