10:42:38.787 main INFO CoarseGrainedExecutorBackend: Started daemon with process name: 153473@amp-jenkins-worker-04
10:42:38.798 main INFO SignalUtils: Registering signal handler for TERM
10:42:38.800 main INFO SignalUtils: Registering signal handler for HUP
10:42:38.801 main INFO SignalUtils: Registering signal handler for INT
10:42:39.788 main WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
10:42:40.002 main INFO SecurityManager: Changing view acls to: jenkins
10:42:40.003 main INFO SecurityManager: Changing modify acls to: jenkins
10:42:40.004 main INFO SecurityManager: Changing view acls groups to:
10:42:40.005 main INFO SecurityManager: Changing modify acls groups to:
10:42:40.006 main INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set()
10:42:41.849 netty-rpc-connection-0 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:41801 after 109 ms (0 ms spent in bootstraps)
10:42:42.061 main INFO SecurityManager: Changing view acls to: jenkins
10:42:42.061 main INFO SecurityManager: Changing modify acls to: jenkins
10:42:42.062 main INFO SecurityManager: Changing view acls groups to:
10:42:42.062 main INFO SecurityManager: Changing modify acls groups to:
10:42:42.062 main INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set()
10:42:42.152 netty-rpc-connection-0 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:41801 after 4 ms (0 ms spent in bootstraps)
10:42:42.339 main INFO DiskBlockManager: Created local directory at /home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-b35d5202-1788-428e-88d8-d2d00e91cd96/executor-d14dd2d2-1eaf-4b60-bcc3-03d9a85e07f8/blockmgr-d1e5f4bf-bd1c-42c9-bb79-5f4bd34d099d
10:42:42.405 main INFO MemoryStore: MemoryStore started with capacity 546.3 MiB
10:42:42.685 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Registering PWR handler.
10:42:42.685 dispatcher-Executor INFO SignalUtils: Registering signal handler for PWR
10:42:42.686 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@amp-jenkins-worker-04.amp:41801
10:42:42.687 main INFO WorkerWatcher: Connecting to worker spark://Worker@amp-jenkins-worker-04.amp:45883
10:42:42.695 netty-rpc-connection-1 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:45883 after 6 ms (0 ms spent in bootstraps)
10:42:42.699 dispatcher-event-loop-1 INFO WorkerWatcher: Successfully connected to spark://Worker@amp-jenkins-worker-04.amp:45883
10:42:42.705 dispatcher-Executor INFO ResourceUtils: ==============================================================
10:42:42.705 dispatcher-Executor INFO ResourceUtils: No custom resources configured for spark.executor.
10:42:42.705 dispatcher-Executor INFO ResourceUtils: ==============================================================
10:42:42.743 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Successfully registered with driver
10:42:42.747 dispatcher-Executor INFO Executor: Starting executor ID 0 on host amp-jenkins-worker-04.amp
10:42:42.860 dispatcher-Executor INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36167.
10:42:42.861 dispatcher-Executor INFO NettyBlockTransferService: Server created on amp-jenkins-worker-04.amp:36167
10:42:42.864 dispatcher-Executor INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
10:42:42.875 dispatcher-Executor INFO BlockManagerMaster: Registering BlockManager BlockManagerId(0, amp-jenkins-worker-04.amp, 36167, None)
10:42:42.885 dispatcher-Executor INFO BlockManagerMaster: Registered BlockManager BlockManagerId(0, amp-jenkins-worker-04.amp, 36167, None)
10:42:42.886 dispatcher-Executor INFO BlockManager: Initialized BlockManager: BlockManagerId(0, amp-jenkins-worker-04.amp, 36167, None)
10:42:42.958 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 0
10:42:42.976 Executor task launch worker for task 0 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
10:42:43.177 Executor task launch worker for task 0 INFO TorrentBroadcast: Started reading broadcast variable 0 with 1 pieces (estimated total size 4.0 MiB)
10:42:43.240 Executor task launch worker for task 0 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:34956 after 5 ms (0 ms spent in bootstraps)
10:42:43.318 Executor task launch worker for task 0 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 59.7 KiB, free 546.2 MiB)
10:42:43.335 Executor task launch worker for task 0 INFO TorrentBroadcast: Reading broadcast variable 0 took 158 ms
10:42:43.726 Executor task launch worker for task 0 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 168.8 KiB, free 546.1 MiB)
10:42:44.247 Executor task launch worker for task 0 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:44.247 Executor task launch worker for task 0 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:44.461 Executor task launch worker for task 0 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:44.461 Executor task launch worker for task 0 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:44.461 Executor task launch worker for task 0 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:44.463 Executor task launch worker for task 0 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:44.475 Executor task launch worker for task 0 INFO CodecConfig: Compression: SNAPPY
10:42:44.486 Executor task launch worker for task 0 INFO CodecConfig: Compression: SNAPPY
10:42:44.531 Executor task launch worker for task 0 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Dictionary is on
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Validation is off
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:44.532 Executor task launch worker for task 0 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:44.637 Executor task launch worker for task 0 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:44.722 Executor task launch worker for task 0 INFO CodecPool: Got brand-new compressor [.snappy]
10:42:45.203 Executor task launch worker for task 0 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:45.704 Executor task launch worker for task 0 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104237_0000_m_000000_0' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-e5a5a748-d6bf-4dca-be3f-c2343f6c0a9f
10:42:45.706 Executor task launch worker for task 0 INFO SparkHadoopMapRedUtil: attempt_20200701104237_0000_m_000000_0: Committed
10:42:45.734 Executor task launch worker for task 0 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2198 bytes result sent to driver
10:42:46.595 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 3
10:42:46.596 Executor task launch worker for task 3 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
10:42:46.620 Executor task launch worker for task 3 INFO TorrentBroadcast: Started reading broadcast variable 1 with 1 pieces (estimated total size 4.0 MiB)
10:42:46.637 Executor task launch worker for task 3 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 59.7 KiB, free 546.0 MiB)
10:42:46.641 Executor task launch worker for task 3 INFO TorrentBroadcast: Reading broadcast variable 1 took 21 ms
10:42:46.654 Executor task launch worker for task 3 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 168.8 KiB, free 545.9 MiB)
10:42:46.683 Executor task launch worker for task 3 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:46.683 Executor task launch worker for task 3 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:46.684 Executor task launch worker for task 3 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:46.684 Executor task launch worker for task 3 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:46.684 Executor task launch worker for task 3 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:46.685 Executor task launch worker for task 3 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:46.685 Executor task launch worker for task 3 INFO CodecConfig: Compression: SNAPPY
10:42:46.686 Executor task launch worker for task 3 INFO CodecConfig: Compression: SNAPPY
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Dictionary is on
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Validation is off
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:46.688 Executor task launch worker for task 3 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:46.691 Executor task launch worker for task 3 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:46.722 Executor task launch worker for task 3 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:46.733 Executor task launch worker for task 3 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104246_0001_m_000001_3' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-d0aac6cd-4b5e-4165-bd51-f8282921ed49
10:42:46.734 Executor task launch worker for task 3 INFO SparkHadoopMapRedUtil: attempt_20200701104246_0001_m_000001_3: Committed
10:42:46.738 Executor task launch worker for task 3 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 2112 bytes result sent to driver
10:42:47.169 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 4
10:42:47.169 Executor task launch worker for task 4 INFO Executor: Running task 0.0 in stage 2.0 (TID 4)
10:42:47.175 Executor task launch worker for task 4 INFO TorrentBroadcast: Started reading broadcast variable 2 with 1 pieces (estimated total size 4.0 MiB)
10:42:47.184 Executor task launch worker for task 4 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 59.6 KiB, free 545.8 MiB)
10:42:47.188 Executor task launch worker for task 4 INFO TorrentBroadcast: Reading broadcast variable 2 took 12 ms
10:42:47.201 Executor task launch worker for task 4 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 168.7 KiB, free 545.6 MiB)
10:42:47.233 Executor task launch worker for task 4 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.234 Executor task launch worker for task 4 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.235 Executor task launch worker for task 4 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.235 Executor task launch worker for task 4 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.235 Executor task launch worker for task 4 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.236 Executor task launch worker for task 4 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.236 Executor task launch worker for task 4 INFO CodecConfig: Compression: SNAPPY
10:42:47.237 Executor task launch worker for task 4 INFO CodecConfig: Compression: SNAPPY
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Dictionary is on
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Validation is off
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:47.239 Executor task launch worker for task 4 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:47.243 Executor task launch worker for task 4 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:47.269 Executor task launch worker for task 4 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:47.283 Executor task launch worker for task 4 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104247_0002_m_000000_4' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4aad7912-f7e7-4616-9544-a27661797594
10:42:47.283 Executor task launch worker for task 4 INFO SparkHadoopMapRedUtil: attempt_20200701104247_0002_m_000000_4: Committed
10:42:47.287 Executor task launch worker for task 4 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 2112 bytes result sent to driver
10:42:47.686 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 6
10:42:47.687 Executor task launch worker for task 6 INFO Executor: Running task 0.0 in stage 3.0 (TID 6)
10:42:47.691 Executor task launch worker for task 6 INFO TorrentBroadcast: Started reading broadcast variable 3 with 1 pieces (estimated total size 4.0 MiB)
10:42:47.701 Executor task launch worker for task 6 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 59.6 KiB, free 545.6 MiB)
10:42:47.704 Executor task launch worker for task 6 INFO TorrentBroadcast: Reading broadcast variable 3 took 13 ms
10:42:47.714 Executor task launch worker for task 6 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 168.7 KiB, free 545.4 MiB)
10:42:47.734 Executor task launch worker for task 6 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.734 Executor task launch worker for task 6 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.735 Executor task launch worker for task 6 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.735 Executor task launch worker for task 6 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.735 Executor task launch worker for task 6 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.735 Executor task launch worker for task 6 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.736 Executor task launch worker for task 6 INFO CodecConfig: Compression: SNAPPY
10:42:47.738 Executor task launch worker for task 6 INFO CodecConfig: Compression: SNAPPY
10:42:47.741 Executor task launch worker for task 6 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:47.741 Executor task launch worker for task 6 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:47.741 Executor task launch worker for task 6 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:47.741 Executor task launch worker for task 6 INFO ParquetOutputFormat: Dictionary is on
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Validation is off
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:47.742 Executor task launch worker for task 6 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:47.750 Executor task launch worker for task 6 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:47.887 Executor task launch worker for task 6 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:47.928 Executor task launch worker for task 6 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104247_0003_m_000000_6' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-5700679d-fa89-4ddb-8674-8f07e9fe409a
10:42:47.928 Executor task launch worker for task 6 INFO SparkHadoopMapRedUtil: attempt_20200701104247_0003_m_000000_6: Committed
10:42:47.930 Executor task launch worker for task 6 INFO Executor: Finished task 0.0 in stage 3.0 (TID 6). 2112 bytes result sent to driver
10:42:47.978 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
10:42:48.096 SIGTERM handler ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
10:42:48.100 shutdown-hook-0 INFO ShutdownHookManager: Shutdown hook called
10:42:48.103 dispatcher-event-loop-1 ERROR WorkerWatcher: Lost connection to worker rpc endpoint spark://Worker@amp-jenkins-worker-04.amp:45883. Exiting.
10:42:48.103 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Driver from amp-jenkins-worker-04.amp:45883 disconnected during shutdown
10:42:48.111 CoarseGrainedExecutorBackend-stop-executor INFO MemoryStore: MemoryStore cleared
10:42:48.111 CoarseGrainedExecutorBackend-stop-executor INFO BlockManager: BlockManager stopped