Console Output

Skipping 1,229 KB..
- filter RDD bound alignments to sample
- filter dataset bound alignments to sample
- filter RDD bound alignments to samples
- filter dataset bound alignments to samples
- sort by read name
- transform dataset via java API
- convert alignments to reads
SmithWatermanSuite:
- gather max position from simple scoring matrix
- gather max position from irregular scoring matrix
- gather max position from irregular scoring matrix with deletions
- score simple alignment with constant gap
- score irregular scoring matrix
- score irregular scoring matrix with indel
- can unroll cigars correctly
- execute simple trackback
- execute trackback with indel
- run end to end smith waterman for simple reads
- run end to end smith waterman for short sequences with indel
- run end to end smith waterman for longer sequences with snp
- run end to end smith waterman for longer sequences with short indel
- run end to end smith waterman for shorter sequence in longer sequence
- run end to end smith waterman for shorter sequence in longer sequence, with indel
- smithWaterman - simple alignment
MdTagSuite:
- null md tag
- zero length md tag
- md tag with non-digit initial value
- md tag invalid base
- md tag, pure insertion
- md tag, pure insertion, test 2
- md tag pure insertion equality
- md tag equality and hashcode
- valid md tags
- get start of read with no mismatches or deletions
- get start of read with no mismatches, but with a deletion at the start
- get start of read with mismatches at the start
- get end of read with no mismatches or deletions
- check that mdtag and rich record return same end
- get end of read with no mismatches, but a deletion at end
- CIGAR with N operator
- CIGAR with multiple N operators
- CIGAR with P operators
- Get correct matches for mdtag with insertion
- Get correct matches for mdtag with mismatches and insertion
- Get correct matches for mdtag with insertion between mismatches
- Get correct matches for mdtag with intron between mismatches
- Get correct matches for mdtag with intron and deletion between mismatches
- Throw exception when number of deleted bases in mdtag disagrees with CIGAR
- Get correct matches for mdtag with mismatch, insertion and deletion
- Get correct matches for mdtag with mismatches, insertion and deletion
- Get correct matches for MDTag with mismatches and deletions
- Get correct matches base from MDTag and CIGAR with N
- get end of read with mismatches and a deletion at end
- get correct string out of mdtag with no mismatches
- get correct string out of mdtag with mismatches at start
- get correct string out of mdtag with deletion at end
- get correct string out of mdtag with mismatches at end
- get correct string out of complex mdtag
- check complex mdtag
- get gapped reference
- move a cigar alignment by two for a read
- rewrite alignment to all matches
- rewrite alignment to two mismatches followed by all matches
- rewrite alignment to include a deletion but otherwise all matches
- rewrite alignment to include an insertion at the start of the read but otherwise all matches
- create new md tag from read vs. reference, perfect match
- create new md tag from read vs. reference, perfect alignment match, 1 mismatch
- create new md tag from read vs. reference, alignment with deletion
- create new md tag from read vs. reference, alignment with insert
- handle '=' and 'X' operators
- CIGAR/MD tag mismatch should cause errors
GenomicDatasetSuite:
- processing a command that is the spark root directory should return an absolute path
- processing a command that is just a single word should do nothing
- processing a command should handle arguments that include spaces
- processing a command that is a single substitution should succeed
- processing a command that is multiple words should split the string
- process a command that is multiple words with a replacement
- process a command that is multiple words with multiple replacements
ParallelFileMergerSuite:
- cannot write both empty gzip block and cram eof
- buffer size must be non-negative
- get the size of several files
- block size must be positive and non-zero when trying to merge files
- must provide files to merge
- if two files are both below the block size, they should merge into one shard
- merge two files where one is greater than the block size
- merge a sharded sam file
- merge a sharded bam file
- merge a sharded cram file
- can't turn a negative index into a path
- generate a path from an index
IndelTableSuite:
- check for indels in a region with known indels
- check for indels in a contig that doesn't exist
- check for indels in a region without known indels
- build indel table from rdd of variants
SnpTableSuite:
- create an empty snp table
- create a snp table from variants on multiple contigs
- create a snp table from a larger set of variants
- perform lookups on multi-contig snp table
- perform lookups on larger snp table
RealignIndelsSuite:
- map reads to targets
- checking mapping to targets for artificial reads
- checking alternative consensus for artificial reads
- checking extraction of reference from reads
- checking realigned reads for artificial input
- checking realigned reads for artificial input with reference file
- checking realigned reads for artificial input using knowns
- checking realigned reads for artificial input using knowns and reads
- skip realigning reads if target is highly covered
- skip realignment if target is an insufficient LOD improvement
- realign reads to an insertion
- test mismatch quality scoring
- test mismatch quality scoring for no mismatches
- test mismatch quality scoring for offset
- test mismatch quality scoring with early exit
- test mismatch quality scoring after unpacking read
- we shouldn't try to realign a region with no target
- we shouldn't try to realign reads with no indel evidence
- test OP and OC tags
- realign a read with an insertion that goes off the end of the read
- if realigning a target doesn't improve the LOD, don't drop reads
- extract seq/qual from a read with no clipped bases
- extract seq/qual from a read with clipped bases at start
- extract seq/qual from a read with clipped bases at end
- if unclip is selected, don't drop base when extracting from a read with clipped bases
- get cigar and coordinates for read that spans indel, no clipped bases
- get cigar and coordinates for read that spans deletion, clipped bases at start
- get cigar and coordinates for read that falls wholly before insertion
- get cigar and coordinates for read that falls wholly after insertion
- get cigar and coordinates for read that falls wholly after deletion
- get cigar and coordinates for read that partially spans insertion, no clipped bases
- get cigar and coordinates for read that partially spans insertion, clipped bases at end
- get cigar and coordinates for read that partially spans insertion, clipped bases both ends
BaseQualityRecalibrationSuite:
- BQSR Test Input #1 w/ VCF Sites without caching
- BQSR Test Input #1 w/ VCF Sites with caching
- BQSR Test Input #1 w/ VCF Sites with serialized caching
DinucCovariateSuite:
- computing dinucleotide pairs for a single base sequence should return (N,N)
- compute dinucleotide pairs for a string of all valid bases
- compute dinucleotide pairs for a string with an N
- compute covariates for a read on the negative strand
- compute covariates for a read on the positive strand
SequenceDictionarySuite:
- Convert from sam sequence record and back
- Convert from SAM sequence dictionary file (with extra fields)
- merge into existing dictionary
- Convert from SAM sequence dictionary and back
- Can retrieve sequence by name
- SequenceDictionary's with same single element are equal
- SequenceDictionary's with same two elements are equals
- SequenceDictionary's with different elements are unequal
- SequenceDictionaries with same elements in different order are compatible
- isCompatible tests equality on overlap
- The addition + works correctly
- The append operation ++ works correctly
- ContainsRefName works correctly for different string types
- Apply on name works correctly for different String types
- convert from sam sequence record and back
- convert from sam sequence dictionary and back
- conversion to sam sequence dictionary has correct sort order
- load sequence dictionary from VCF file
- empty sequence dictionary must be empty
- test filter to reference name
- test filter to reference names
- test filter to reference name by function
GenomicPositionPartitionerSuite:
- partitions the UNMAPPED ReferencePosition into the top partition
- if we do not have a contig for a record, we throw an IAE
- partitioning into N pieces on M total sequence length, where N > M, results in M partitions
- correctly partitions a single dummy sequence into two pieces
- correctly counts cumulative lengths
- correctly partitions positions across two dummy sequences
- test that we can range partition ADAMRecords
- test that we can range partition ADAMRecords indexed by sample
- test that simple partitioning works okay on a reasonable set of ADAMRecords
- test indexed ReferencePosition partitioning works on a set of indexed ADAMRecords
CoverageSuite:
- Convert to coverage from valid Feature
- Convert to coverage from valid Feature with sampleId
- Convert to coverage from Feature with null/empty contigName fails with correct error
- Convert to coverage from Feature with no start/end position fails with correct error
- Convert to coverage from Feature with no score fails with correct error
InnerTreeRegionJoinSuite:
- Ensure same reference regions get passed together
- Overlapping reference regions
- Multiple reference regions do not throw exception
VariantDatasetSuite:
- union two variant genomic datasets together
- round trip to parquet
- save and reload from partitioned parquet
- use broadcast join to pull down variants mapped to targets
- use right outer broadcast join to pull down variants mapped to targets
- use shuffle join to pull down variants mapped to targets
- use right outer shuffle join to pull down variants mapped to targets
- use left outer shuffle join to pull down variants mapped to targets
- use full outer shuffle join to pull down variants mapped to targets
- use shuffle join with group by to pull down variants mapped to targets
- use right outer shuffle join with group by to pull down variants mapped to targets
- convert back to variant contexts
- load parquet to sql, save, re-read from avro
2020-01-24 12:13:51 WARN  DatasetBoundSliceDataset:190 - Saving directly as Parquet from SQL. Options other than compression codec are ignored.
- transform variants to slice genomic dataset
- transform variants to coverage genomic dataset
- transform variants to feature genomic dataset
- transform variants to fragment genomic dataset
- transform variants to read genomic dataset
- transform variants to genotype genomic dataset
- transform variants to variant context genomic dataset
- filter RDD bound variants to filters passed
- filter dataset bound variants to filters passed
- filter RDD bound variants by quality
- filter dataset bound variants by quality
- filter RDD bound variants by read depth
- filter dataset bound variants by read depth
- filter RDD bound variants by reference read depth
- filter dataset bound variants by reference read depth
- filter RDD bound single nucleotide variants
- filter dataset bound single nucleotide variants
- filter RDD bound multiple nucleotide variants
- filter dataset bound multiple nucleotide variants
- filter RDD bound indel variants
- filter dataset bound indel variants
- filter RDD bound variants to single nucleotide variants
- filter dataset bound variants to single nucleotide variants
- filter RDD bound variants to multiple nucleotide variants
- filter dataset bound variants to multiple nucleotide variants
- filter RDD bound variants to indel variants
- filter dataset bound variants to indel variants
- transform dataset via java API
Run completed in 6 minutes, 36 seconds.
Total number of tests run: 1175
Suites: completed 68, aborted 0
Tests: succeeded 1175, failed 0, canceled 0, ignored 5, pending 0
All tests passed.
[INFO] 
[INFO] <<< scoverage-maven-plugin:1.1.1:report (default-cli) < [scoverage]test @ adam-core-spark2_2.11 <<<
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:report (default-cli) @ adam-core-spark2_2.11 ---
[INFO] [scoverage] Generating cobertura XML report...
[INFO] [scoverage] Generating scoverage XML report...
[INFO] [scoverage] Generating scoverage HTML report...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building ADAM_2.11: APIs for Java, Python 0.30.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-versions) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:add-source (add-source) @ adam-apis-spark2_2.11 ---
[INFO] Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/scala added.
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-apis-spark2_2.11 ---
[INFO] Modified 0 of 5 .scala files
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ adam-apis-spark2_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ adam-apis-spark2_2.11 ---
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/scala:-1: info: compiling
[INFO] Compiling 4 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/2.11.12/classes at 1579896853627
[WARNING] warning: there were two feature warnings; re-run with -feature for details
[WARNING] one warning found
[INFO] prepare-compile in 0 s
[INFO] compile in 5 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ adam-apis-spark2_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:add-test-source (add-test-source) @ adam-apis-spark2_2.11 ---
[INFO] Test Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/scala added.
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ adam-apis-spark2_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ adam-apis-spark2_2.11 ---
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/java:-1: info: compiling
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/scala:-1: info: compiling
[INFO] Compiling 9 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/2.11.12/test-classes at 1579896859457
[INFO] prepare-compile in 0 s
[INFO] compile in 4 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ adam-apis-spark2_2.11 ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 8 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/2.11.12/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (default-test) @ adam-apis-spark2_2.11 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ adam-apis-spark2_2.11 ---
Discovery starting.
Discovery completed in 180 milliseconds.
Run starting. Expected test count is: 10
JavaADAMContextSuite:
2020-01-24 12:14:25 WARN  Utils:66 - Your hostname, research-jenkins-worker-08 resolves to a loopback address: 127.0.1.1; using 192.168.10.28 instead (on interface eth0)
2020-01-24 12:14:25 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
2020-01-24 12:14:25 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- can read and write a small .SAM file
- loadIndexedBam with multiple ReferenceRegions
- can read and write a small .SAM file as fragments
- can read and write a small .bed file as features
- can read and write a small .bed file as coverage
- can read and write a small .vcf as genotypes
- can read and write a small .vcf as variants
- can read a two bit file
2020-01-24 12:14:34 WARN  RDDBoundSequenceDataset:190 - asSingleFile = true ignored when saving as Parquet.
- can read and write .fa as sequences
2020-01-24 12:14:34 WARN  RDDBoundSliceDataset:190 - asSingleFile = true ignored when saving as Parquet.
- can read and write .fa as slices
Run completed in 10 seconds, 437 milliseconds.
Total number of tests run: 10
Suites: completed 2, aborted 0
Tests: succeeded 10, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] >>> scoverage-maven-plugin:1.1.1:report (default-cli) > [scoverage]test @ adam-apis-spark2_2.11 >>>
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-versions) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:add-source (add-source) @ adam-apis-spark2_2.11 ---
[INFO] Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/scala added.
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-apis-spark2_2.11 ---
[INFO] Modified 0 of 5 .scala files
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:pre-compile (default-cli) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ adam-apis-spark2_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ adam-apis-spark2_2.11 ---
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/main/scala:-1: info: compiling
[INFO] Compiling 4 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/2.11.12/scoverage-classes at 1579896875832
[INFO] [info] Cleaning datadir [/home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/scoverage-data]
[INFO] [info] Beginning coverage instrumentation
[INFO] [info] Instrumentation completed [265 statements]
[INFO] [info] Wrote instrumentation file [/home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/scoverage-data/scoverage.coverage.xml]
[INFO] [info] Will write measurement data to [/home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/scoverage-data]
[WARNING] warning: there were two feature warnings; re-run with -feature for details
[WARNING] one warning found
[INFO] prepare-compile in 0 s
[INFO] compile in 5 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:compile (default-compile) @ adam-apis-spark2_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:post-compile (default-cli) @ adam-apis-spark2_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:add-test-source (add-test-source) @ adam-apis-spark2_2.11 ---
[INFO] Test Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/scala added.
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ adam-apis-spark2_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ adam-apis-spark2_2.11 ---
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/java:-1: info: compiling
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/src/test/scala:-1: info: compiling
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/generated-test-sources/test-annotations:-1: info: compiling
[INFO] Compiling 9 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-apis/target/2.11.12/test-classes at 1579896881942
[INFO] prepare-compile in 0 s
[INFO] compile in 4 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.0:testCompile (default-testCompile) @ adam-apis-spark2_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M3:test (default-test) @ adam-apis-spark2_2.11 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ adam-apis-spark2_2.11 ---
Discovery starting.
Discovery completed in 181 milliseconds.
Run starting. Expected test count is: 10
JavaADAMContextSuite:
2020-01-24 12:14:47 WARN  Utils:66 - Your hostname, research-jenkins-worker-08 resolves to a loopback address: 127.0.1.1; using 192.168.10.28 instead (on interface eth0)
2020-01-24 12:14:47 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
2020-01-24 12:14:47 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- can read and write a small .SAM file
- loadIndexedBam with multiple ReferenceRegions
- can read and write a small .SAM file as fragments
- can read and write a small .bed file as features
- can read and write a small .bed file as coverage
- can read and write a small .vcf as genotypes
- can read and write a small .vcf as variants
- can read a two bit file
2020-01-24 12:14:56 WARN  RDDBoundSequenceDataset:190 - asSingleFile = true ignored when saving as Parquet.
- can read and write .fa as sequences
2020-01-24 12:14:57 WARN  RDDBoundSliceDataset:190 - asSingleFile = true ignored when saving as Parquet.
- can read and write .fa as slices
Run completed in 10 seconds, 772 milliseconds.
Total number of tests run: 10
Suites: completed 2, aborted 0
Tests: succeeded 10, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] <<< scoverage-maven-plugin:1.1.1:report (default-cli) < [scoverage]test @ adam-apis-spark2_2.11 <<<
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:report (default-cli) @ adam-apis-spark2_2.11 ---
[INFO] [scoverage] Generating cobertura XML report...
[INFO] [scoverage] Generating scoverage XML report...
[INFO] [scoverage] Generating scoverage HTML report...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building ADAM_2.11: CLI 0.30.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-versions) @ adam-cli-spark2_2.11 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ adam-cli-spark2_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:timestamp-property (timestamp-property) @ adam-cli-spark2_2.11 ---
[INFO] 
[INFO] --- git-commit-id-plugin:2.2.2:revision (default) @ adam-cli-spark2_2.11 ---
[INFO] 
[INFO] --- templating-maven-plugin:1.0.0:filter-sources (filter-src) @ adam-cli-spark2_2.11 ---
[INFO] Copying files with filtering to temporary directory.
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copied 1 files to output directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/target/generated-sources/java-templates
[INFO] Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/target/generated-sources/java-templates added.
[INFO] 
[INFO] --- build-helper-maven-plugin:3.0.0:add-source (add-source) @ adam-cli-spark2_2.11 ---
[INFO] Source directory: /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/src/main/scala added.
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-cli-spark2_2.11 ---
[INFO] Modified 0 of 29 .scala files
[INFO] 
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ adam-cli-spark2_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/src/main/resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ adam-cli-spark2_2.11 ---
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/target/generated-sources/java-templates:-1: info: compiling
[INFO] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/src/main/scala:-1: info: compiling
[INFO] Compiling 18 source files to /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/target/2.11.12/classes at 1579896899138
[ERROR] /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALAVER/2.11/SPARK_VERSION/2.4.4/label/ubuntu/adam-cli/src/main/scala/org/bdgenomics/adam/cli/PrintADAM.scala:85: error: overloaded method value jsonEncoder with alternatives:
[ERROR]   (x$1: org.apache.avro.Schema,x$2: java.io.OutputStream,x$3: Boolean)org.apache.avro.io.JsonEncoder <and>
[ERROR]   (x$1: org.apache.avro.Schema,x$2: java.io.OutputStream)org.apache.avro.io.JsonEncoder
[ERROR]  cannot be applied to (org.apache.avro.Schema, java.io.PrintStream, pretty: Boolean)
[ERROR]               val encoder = EncoderFactory.get().jsonEncoder(schema, out, pretty = true)
[ERROR]                                                  ^
[ERROR] one error found
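The failing call mixes a Scala named argument (`pretty = true`) with a Java-defined overloaded method: Java parameters surface to scalac as `x$1`, `x$2`, `x$3`, so the name `pretty` matches neither overload and resolution fails (the `java.io.PrintStream` argument itself is fine, since `PrintStream` extends `OutputStream`). A minimal fix sketch, assuming the Avro `EncoderFactory` API shown in the error message and not any committed change to this branch, is to pass the flag positionally:

```scala
// PrintADAM.scala:85 — hypothetical fix sketch, not the actual patch.
// jsonEncoder(Schema, OutputStream, boolean) is Java-defined, so the named
// argument `pretty = true` defeats overload resolution; pass it positionally.
val encoder = EncoderFactory.get().jsonEncoder(schema, out, true)
```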
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] ADAM_2.11 .......................................... SUCCESS [ 11.832 s]
[INFO] ADAM_2.11: Avro-to-Dataset codegen utils ........... SUCCESS [ 11.083 s]
[INFO] ADAM_2.11: Core .................................... SUCCESS [15:05 min]
[INFO] ADAM_2.11: APIs for Java, Python ................... SUCCESS [ 44.879 s]
[INFO] ADAM_2.11: CLI ..................................... FAILURE [  4.923 s]
[INFO] ADAM_2.11: Assembly ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16:18 min
[INFO] Finished at: 2020-01-24T12:15:03-08:00
[INFO] Final Memory: 77M/1466M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (scala-compile-first) on project adam-cli-spark2_2.11: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :adam-cli-spark2_2.11
Build step 'Execute shell' marked build as failure
Recording test results
Publishing Scoverage XML and HTML report...
Setting commit status on GitHub for https://github.com/bigdatagenomics/adam/commit/9a16079f5a139ff66e00e34273154a8fc202520b
Finished: FAILURE