10:42:38.655 main INFO CoarseGrainedExecutorBackend: Started daemon with process name: 153472@amp-jenkins-worker-04
10:42:38.665 main INFO SignalUtils: Registering signal handler for TERM
10:42:38.667 main INFO SignalUtils: Registering signal handler for HUP
10:42:38.667 main INFO SignalUtils: Registering signal handler for INT
10:42:40.180 main WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
10:42:40.375 main INFO SecurityManager: Changing view acls to: jenkins
10:42:40.376 main INFO SecurityManager: Changing modify acls to: jenkins
10:42:40.376 main INFO SecurityManager: Changing view acls groups to:
10:42:40.377 main INFO SecurityManager: Changing modify acls groups to:
10:42:40.378 main INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set()
10:42:42.098 netty-rpc-connection-0 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:41801 after 88 ms (0 ms spent in bootstraps)
10:42:42.435 main INFO SecurityManager: Changing view acls to: jenkins
10:42:42.436 main INFO SecurityManager: Changing modify acls to: jenkins
10:42:42.436 main INFO SecurityManager: Changing view acls groups to:
10:42:42.436 main INFO SecurityManager: Changing modify acls groups to:
10:42:42.437 main INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set()
10:42:42.535 netty-rpc-connection-0 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:41801 after 7 ms (0 ms spent in bootstraps)
10:42:42.611 main INFO DiskBlockManager: Created local directory at /home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-b35d5202-1788-428e-88d8-d2d00e91cd96/executor-3abad7de-f701-4626-a5fb-eb370217170d/blockmgr-e211c3ac-6f3b-488d-b24d-cc9db7b24628
10:42:42.665 main INFO MemoryStore: MemoryStore started with capacity 546.3 MiB
10:42:42.895 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Registering PWR handler.
10:42:42.896 dispatcher-Executor INFO SignalUtils: Registering signal handler for PWR
10:42:42.897 main INFO WorkerWatcher: Connecting to worker spark://Worker@amp-jenkins-worker-04.amp:41792
10:42:42.897 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler@amp-jenkins-worker-04.amp:41801
10:42:42.909 netty-rpc-connection-1 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:41792 after 10 ms (0 ms spent in bootstraps)
10:42:42.916 dispatcher-event-loop-1 INFO WorkerWatcher: Successfully connected to spark://Worker@amp-jenkins-worker-04.amp:41792
10:42:42.920 dispatcher-Executor INFO ResourceUtils: ==============================================================
10:42:42.920 dispatcher-Executor INFO ResourceUtils: No custom resources configured for spark.executor.
10:42:42.921 dispatcher-Executor INFO ResourceUtils: ==============================================================
10:42:42.952 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Successfully registered with driver
10:42:42.956 dispatcher-Executor INFO Executor: Starting executor ID 1 on host amp-jenkins-worker-04.amp
10:42:43.073 dispatcher-Executor INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46210.
10:42:43.074 dispatcher-Executor INFO NettyBlockTransferService: Server created on amp-jenkins-worker-04.amp:46210
10:42:43.077 dispatcher-Executor INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
10:42:43.085 dispatcher-Executor INFO BlockManagerMaster: Registering BlockManager BlockManagerId(1, amp-jenkins-worker-04.amp, 46210, None)
10:42:43.095 dispatcher-Executor INFO BlockManagerMaster: Registered BlockManager BlockManagerId(1, amp-jenkins-worker-04.amp, 46210, None)
10:42:43.096 dispatcher-Executor INFO BlockManager: Initialized BlockManager: BlockManagerId(1, amp-jenkins-worker-04.amp, 46210, None)
10:42:43.150 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 1
10:42:43.165 Executor task launch worker for task 1 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
10:42:43.358 Executor task launch worker for task 1 INFO TorrentBroadcast: Started reading broadcast variable 0 with 1 pieces (estimated total size 4.0 MiB)
10:42:43.411 Executor task launch worker for task 1 INFO TransportClientFactory: Successfully created connection to amp-jenkins-worker-04.amp/192.168.10.24:34956 after 3 ms (0 ms spent in bootstraps)
10:42:43.446 Executor task launch worker for task 1 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 59.7 KiB, free 546.2 MiB)
10:42:43.459 Executor task launch worker for task 1 INFO TorrentBroadcast: Reading broadcast variable 0 took 100 ms
10:42:43.859 Executor task launch worker for task 1 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 168.8 KiB, free 546.1 MiB)
10:42:44.341 Executor task launch worker for task 1 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:44.342 Executor task launch worker for task 1 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:44.549 Executor task launch worker for task 1 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:44.549 Executor task launch worker for task 1 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:44.549 Executor task launch worker for task 1 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:44.551 Executor task launch worker for task 1 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:44.562 Executor task launch worker for task 1 INFO CodecConfig: Compression: SNAPPY
10:42:44.572 Executor task launch worker for task 1 INFO CodecConfig: Compression: SNAPPY
10:42:44.606 Executor task launch worker for task 1 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Dictionary is on
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Validation is off
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:44.607 Executor task launch worker for task 1 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:44.719 Executor task launch worker for task 1 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:44.798 Executor task launch worker for task 1 INFO CodecPool: Got brand-new compressor [.snappy]
10:42:45.725 Executor task launch worker for task 1 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:46.405 Executor task launch worker for task 1 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104237_0000_m_000001_1' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-e5a5a748-d6bf-4dca-be3f-c2343f6c0a9f
10:42:46.406 Executor task launch worker for task 1 INFO SparkHadoopMapRedUtil: attempt_20200701104237_0000_m_000001_1: Committed
10:42:46.433 Executor task launch worker for task 1 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2198 bytes result sent to driver
10:42:46.594 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 2
10:42:46.595 Executor task launch worker for task 2 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
10:42:46.633 Executor task launch worker for task 2 INFO TorrentBroadcast: Started reading broadcast variable 1 with 1 pieces (estimated total size 4.0 MiB)
10:42:46.646 Executor task launch worker for task 2 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 59.7 KiB, free 546.0 MiB)
10:42:46.651 Executor task launch worker for task 2 INFO TorrentBroadcast: Reading broadcast variable 1 took 17 ms
10:42:46.663 Executor task launch worker for task 2 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 168.8 KiB, free 545.9 MiB)
10:42:46.785 Executor task launch worker for task 2 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:46.785 Executor task launch worker for task 2 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:46.786 Executor task launch worker for task 2 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:46.787 Executor task launch worker for task 2 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:46.787 Executor task launch worker for task 2 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:46.788 Executor task launch worker for task 2 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.spark.sql.execution.datasources.parquet.MarkingFileOutputCommitter
10:42:46.788 Executor task launch worker for task 2 INFO CodecConfig: Compression: SNAPPY
10:42:46.789 Executor task launch worker for task 2 INFO CodecConfig: Compression: SNAPPY
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Dictionary is on
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Validation is off
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:46.790 Executor task launch worker for task 2 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:46.791 Executor task launch worker for task 2 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:46.794 Executor task launch worker for task 2 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:46.835 Executor task launch worker for task 2 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:46.844 Executor task launch worker for task 2 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104246_0001_m_000000_2' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-d0aac6cd-4b5e-4165-bd51-f8282921ed49
10:42:46.844 Executor task launch worker for task 2 INFO SparkHadoopMapRedUtil: attempt_20200701104246_0001_m_000000_2: Committed
10:42:46.847 Executor task launch worker for task 2 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 2112 bytes result sent to driver
10:42:47.169 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 5
10:42:47.169 Executor task launch worker for task 5 INFO Executor: Running task 1.0 in stage 2.0 (TID 5)
10:42:47.175 Executor task launch worker for task 5 INFO TorrentBroadcast: Started reading broadcast variable 2 with 1 pieces (estimated total size 4.0 MiB)
10:42:47.184 Executor task launch worker for task 5 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 59.6 KiB, free 545.8 MiB)
10:42:47.187 Executor task launch worker for task 5 INFO TorrentBroadcast: Reading broadcast variable 2 took 12 ms
10:42:47.201 Executor task launch worker for task 5 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 168.7 KiB, free 545.6 MiB)
10:42:47.366 Executor task launch worker for task 5 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.366 Executor task launch worker for task 5 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.367 Executor task launch worker for task 5 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.367 Executor task launch worker for task 5 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.367 Executor task launch worker for task 5 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.367 Executor task launch worker for task 5 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.368 Executor task launch worker for task 5 INFO CodecConfig: Compression: SNAPPY
10:42:47.368 Executor task launch worker for task 5 INFO CodecConfig: Compression: SNAPPY
10:42:47.369 Executor task launch worker for task 5 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Dictionary is on
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Validation is off
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:47.370 Executor task launch worker for task 5 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:47.372 Executor task launch worker for task 5 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:47.391 Executor task launch worker for task 5 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:47.401 Executor task launch worker for task 5 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104247_0002_m_000001_5' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-4aad7912-f7e7-4616-9544-a27661797594
10:42:47.402 Executor task launch worker for task 5 INFO SparkHadoopMapRedUtil: attempt_20200701104247_0002_m_000001_5: Committed
10:42:47.406 Executor task launch worker for task 5 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 2112 bytes result sent to driver
10:42:47.686 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Got assigned task 7
10:42:47.687 Executor task launch worker for task 7 INFO Executor: Running task 1.0 in stage 3.0 (TID 7)
10:42:47.691 Executor task launch worker for task 7 INFO TorrentBroadcast: Started reading broadcast variable 3 with 1 pieces (estimated total size 4.0 MiB)
10:42:47.701 Executor task launch worker for task 7 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 59.6 KiB, free 545.6 MiB)
10:42:47.705 Executor task launch worker for task 7 INFO TorrentBroadcast: Reading broadcast variable 3 took 13 ms
10:42:47.716 Executor task launch worker for task 7 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 168.7 KiB, free 545.4 MiB)
10:42:47.745 Executor task launch worker for task 7 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.745 Executor task launch worker for task 7 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.746 Executor task launch worker for task 7 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.746 Executor task launch worker for task 7 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
10:42:47.746 Executor task launch worker for task 7 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
10:42:47.747 Executor task launch worker for task 7 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
10:42:47.747 Executor task launch worker for task 7 INFO CodecConfig: Compression: SNAPPY
10:42:47.758 Executor task launch worker for task 7 INFO CodecConfig: Compression: SNAPPY
10:42:47.759 Executor task launch worker for task 7 INFO ParquetOutputFormat: Parquet block size to 134217728
10:42:47.759 Executor task launch worker for task 7 INFO ParquetOutputFormat: Parquet page size to 1048576
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Dictionary is on
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Validation is off
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Page size checking is: estimated
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Min row count for page size check is: 100
10:42:47.760 Executor task launch worker for task 7 INFO ParquetOutputFormat: Max row count for page size check is: 10000
10:42:47.763 Executor task launch worker for task 7 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema: { "type" : "struct", "fields" : [ { "name" : "_1", "type" : "integer", "nullable" : false, "metadata" : { } }, { "name" : "_2", "type" : "string", "nullable" : true, "metadata" : { } } ] } and corresponding Parquet message type: message spark_schema { required int32 _1; optional binary _2 (UTF8); }
10:42:47.930 Executor task launch worker for task 7 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 17
10:42:47.936 Executor task launch worker for task 7 INFO FileOutputCommitter: Saved output of task 'attempt_20200701104247_0003_m_000001_7' to file:/home/jenkins/workspace/NewSparkPullRequestBuilder/target/tmp/spark-5700679d-fa89-4ddb-8674-8f07e9fe409a
10:42:47.936 Executor task launch worker for task 7 INFO SparkHadoopMapRedUtil: attempt_20200701104247_0003_m_000001_7: Committed
10:42:47.938 Executor task launch worker for task 7 INFO Executor: Finished task 1.0 in stage 3.0 (TID 7). 2112 bytes result sent to driver
10:42:47.988 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
10:42:48.098 SIGTERM handler ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
10:42:48.098 dispatcher-event-loop-1 ERROR WorkerWatcher: Lost connection to worker rpc endpoint spark://Worker@amp-jenkins-worker-04.amp:41792. Exiting.
10:42:48.098 dispatcher-Executor INFO CoarseGrainedExecutorBackend: Driver from amp-jenkins-worker-04.amp:41792 disconnected during shutdown
10:42:48.103 shutdown-hook-0 INFO DiskBlockManager: Shutdown hook called
10:42:48.115 shutdown-hook-0 INFO ShutdownHookManager: Shutdown hook called
10:42:48.119 CoarseGrainedExecutorBackend-stop-executor INFO MemoryStore: MemoryStore cleared
10:42:48.120 CoarseGrainedExecutorBackend-stop-executor INFO BlockManager: BlockManager stopped