Console Output

Skipping 596 KB..
- load variant contexts with metadata from data frame
- load variants with metadata from data frame
- read a fasta file with short sequences as sequences
- read a fasta file with long sequences as sequences
- read a fasta file with short sequences as slices
- read a fasta file with long sequences as slices
CycleCovariateSuite:
- compute covariates for an unpaired read on the negative strand
- compute covariates for a first-of-pair read on the negative strand
- compute covariates for a second-of-pair read on the negative strand
- compute covariates for an unpaired read on the positive strand
- compute covariates for a first-of-pair read on the positive strand
- compute covariates for a second-of-pair read on the positive strand
ConsensusGeneratorFromKnownsSuite:
- no consensuses for empty target
- no consensuses for reads that don't overlap a target
- return a consensus for read overlapping a single target
RichCigarSuite:
- moving 2 bp from a deletion to a match operator
- moving 2 bp from a insertion to a match operator
- moving 1 base in a two element cigar
- move to start of read
- process right clipped cigar
- process left clipped cigar
- process cigar clipped on both ends
MDTaggingSuite:
- test adding MDTags over boundary
- test adding MDTags; reads span full contig
- test adding MDTags; reads start inside first fragment
- test adding MDTags; reads end inside last fragment
- test adding MDTags; reads start inside first fragment and end inside last fragment
- test adding MDTags; reads start and end in middle fragments
2019-08-10 12:32:44 WARN  BlockManager:66 - Putting block rdd_5_3 failed due to exception java.lang.Exception: Contig chr2 not found in reference map with keys: chr1.
2019-08-10 12:32:44 WARN  BlockManager:66 - Block rdd_5_3 could not be removed as it was not found on disk or in memory
2019-08-10 12:32:44 ERROR Executor:91 - Exception in task 3.0 in stage 2.0 (TID 11)
java.lang.Exception: Contig chr2 not found in reference map with keys: chr1
	at org.bdgenomics.adam.util.ReferenceContigMap.$anonfun$extract$1(ReferenceContigMap.scala:64)
	at scala.collection.immutable.Map$Map1.getOrElse(Map.scala:119)
	at org.bdgenomics.adam.util.ReferenceContigMap.extract(ReferenceContigMap.scala:63)
	at org.bdgenomics.adam.rdd.read.MDTagging.$anonfun$addMDTagsBroadcast$3(MDTagging.scala:76)
	at scala.Option$WithFilter.map(Option.scala:163)
	at org.bdgenomics.adam.rdd.read.MDTagging.$anonfun$addMDTagsBroadcast$1(MDTagging.scala:71)
	at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
	at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:222)
	at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
	at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1165)
	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
	at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
	at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
	at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-08-10 12:32:44 WARN  TaskSetManager:66 - Lost task 3.0 in stage 2.0 (TID 11, localhost, executor driver): java.lang.Exception: Contig chr2 not found in reference map with keys: chr1

2019-08-10 12:32:44 ERROR TaskSetManager:70 - Task 3 in stage 2.0 failed 1 times; aborting job
- try realigning a read on a missing contig, stringency == STRICT
2019-08-10 12:32:45 WARN  MDTagging:190 - Caught exception when processing read chr2: java.lang.Exception: Contig chr2 not found in reference map with keys: chr1
- try realigning a read on a missing contig, stringency == LENIENT
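
The STRICT and LENIENT outcomes above (job aborted vs. WARN-and-continue) follow the usual htsjdk ValidationStringency pattern. A minimal sketch of that pattern, with hypothetical helper names; this is not ADAM's actual MDTagging code:

import htsjdk.samtools.ValidationStringency

// Hypothetical helper: route a per-read failure through a stringency
// setting, mirroring the behavior visible in the log above.
def extractOrHandle(contig: String,
                    reference: Map[String, String],
                    stringency: ValidationStringency): Option[String] = {
  reference.get(contig) match {
    case some @ Some(_) => some
    case None =>
      val e = new Exception(
        s"Contig $contig not found in reference map with keys: ${reference.keys.mkString(", ")}")
      stringency match {
        case ValidationStringency.STRICT  => throw e // task fails; Spark aborts the job
        case ValidationStringency.LENIENT =>
          System.err.println(s"Caught exception when processing read $contig: $e")
          None // log a WARN-style message and skip the read
        case _ => None // SILENT: skip quietly
      }
  }
}
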
FileExtensionsSuite:
- ends in gzip extension
- is a vcf extension
PhredUtilsSuite:
- convert low phred score to log and back
- convert high phred score to log and back
- convert overflowing phred score to log and back and clip
- convert negative zero log probability to phred and clip
- round trip log probabilities
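
For context, the conversions exercised here follow the standard Phred definition Q = -10 * log10(p_error). A self-contained sketch of a log-space round trip with clipping; the cap and names are illustrative assumptions, not ADAM's PhredUtils API:

object PhredSketch {
  // Illustrative cap for "overflowing" scores; the actual bound used
  // by ADAM is an assumption here.
  val maxPhred = 3233

  // Natural-log error probability for a Phred score.
  def phredToLogError(q: Int): Double =
    -q / 10.0 * math.log(10.0)

  // Back to Phred, clipping scores that overflow the cap.
  def logErrorToPhred(logError: Double): Int = {
    val q = math.round(-10.0 * logError / math.log(10.0)).toInt
    math.min(q, maxPhred)
  }
}

// Round trip: PhredSketch.logErrorToPhred(PhredSketch.phredToLogError(30)) == 30
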
ReadDatasetSuite:
- create a new read genomic dataset
- create a new read genomic dataset with sequence dictionary
- save as parquet
- round trip as parquet
- save as fastq
- save as single file fastq
- filter read genomic dataset by reference region
- broadcast region join reads and features
- shuffle region join reads and features
- convert reads to alignments
- convert reads to sequences
- convert reads to slices
SmithWatermanSuite:
- gather max position from simple scoring matrix
- gather max position from irregular scoring matrix
- gather max position from irregular scoring matrix with deletions
- score simple alignment with constant gap
- score irregular scoring matrix
- score irregular scoring matrix with indel
- can unroll cigars correctly
- execute simple traceback
- execute traceback with indel
- run end to end smith waterman for simple reads
- run end to end smith waterman for short sequences with indel
- run end to end smith waterman for longer sequences with snp
- run end to end smith waterman for longer sequences with short indel
- run end to end smith waterman for shorter sequence in longer sequence
- run end to end smith waterman for shorter sequence in longer sequence, with indel
- smithWaterman - simple alignment
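
As a reference point for the suite above: Smith-Waterman local alignment fills a scoring matrix where each cell takes the best of a diagonal match/mismatch move, a gap move, or zero, and traceback starts from the matrix maximum. A minimal scoring sketch with a constant gap penalty (parameters are illustrative, not ADAM's defaults):

// Fill the local-alignment scoring matrix for sequences x and y.
def swMatrix(x: String, y: String,
             matchScore: Double = 1.0,
             mismatchScore: Double = -1.0,
             gapScore: Double = -1.0): Array[Array[Double]] = {
  val m = Array.ofDim[Double](x.length + 1, y.length + 1)
  for (i <- 1 to x.length; j <- 1 to y.length) {
    val diag = m(i - 1)(j - 1) +
      (if (x(i - 1) == y(j - 1)) matchScore else mismatchScore)
    m(i)(j) = Seq(0.0, diag, m(i - 1)(j) + gapScore, m(i)(j - 1) + gapScore).max
  }
  m
}

// "Gather max position": the cell where traceback begins.
def maxPosition(m: Array[Array[Double]]): (Int, Int) =
  (for (i <- m.indices; j <- m(i).indices) yield (i, j)).maxBy { case (i, j) => m(i)(j) }
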
MdTagSuite:
- null md tag
- zero length md tag
- md tag with non-digit initial value
- md tag invalid base
- md tag, pure insertion
- md tag, pure insertion, test 2
- md tag pure insertion equality
- md tag equality and hashcode
- valid md tags
- get start of read with no mismatches or deletions
- get start of read with no mismatches, but with a deletion at the start
- get start of read with mismatches at the start
- get end of read with no mismatches or deletions
- check that mdtag and rich record return same end
- get end of read with no mismatches, but a deletion at end
- CIGAR with N operator
- CIGAR with multiple N operators
- CIGAR with P operators
- Get correct matches for mdtag with insertion
- Get correct matches for mdtag with mismatches and insertion
- Get correct matches for mdtag with insertion between mismatches
- Get correct matches for mdtag with intron between mismatches
- Get correct matches for mdtag with intron and deletion between mismatches
- Throw exception when number of deleted bases in mdtag disagrees with CIGAR
- Get correct matches for mdtag with mismatch, insertion and deletion
- Get correct matches for mdtag with mismatches, insertion and deletion
- Get correct matches for MDTag with mismatches and deletions
- Get correct matches base from MDTag and CIGAR with N
- get end of read with mismatches and a deletion at end
- get correct string out of mdtag with no mismatches
- get correct string out of mdtag with mismatches at start
- get correct string out of mdtag with deletion at end
- get correct string out of mdtag with mismatches at end
- get correct string out of complex mdtag
- check complex mdtag
- get gapped reference
- move a cigar alignment by two for a read
- rewrite alignment to all matches
- rewrite alignment to two mismatches followed by all matches
- rewrite alignment to include a deletion but otherwise all matches
- rewrite alignment to include an insertion at the start of the read but otherwise all matches
- create new md tag from read vs. reference, perfect match
- create new md tag from read vs. reference, perfect alignment match, 1 mismatch
- create new md tag from read vs. reference, alignment with deletion
- create new md tag from read vs. reference, alignment with insert
- handle '=' and 'X' operators
- CIGAR/MD tag mismatch should cause errors
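
The MD grammar these cases exercise comes from the SAM specification: match-run lengths as digits, a mismatched reference base as a letter, and a reference deletion as '^' followed by the deleted bases. A small validity check built directly on the spec's regex (the helper name is illustrative):

// SAM spec MD tag regex: [0-9]+(([A-Z]|\^[A-Z]+)[0-9]+)*
val mdPattern = """[0-9]+(([A-Z]|\^[A-Z]+)[0-9]+)*""".r

def isValidMdTag(tag: String): Boolean =
  mdPattern.pattern.matcher(tag).matches

// e.g. isValidMdTag("10A5^AC6") is true: 10 matches, a mismatch where
// the reference had A, 5 matches, a deletion of AC, then 6 matches.
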
GenomicDatasetSuite:
- processing a command that is the spark root directory should return an absolute path
- processing a command that is just a single word should do nothing
- processing a command should handle arguments that include spaces
- processing a command that is a single substitution should succeed
- processing a command that is multiple words should split the string
- process a command that is multiple words with a replacement
- process a command that is multiple words with multiple replacements
ParallelFileMergerSuite:
- cannot write both empty gzip block and cram eof
- buffer size must be non-negative
- get the size of several files
- block size must be positive and non-zero when trying to merge files
- must provide files to merge
- if two files are both below the block size, they should merge into one shard
- merge two files where one is greater than the block size
- merge a sharded sam file
- merge a sharded bam file
- merge a sharded cram file
- can't turn a negative index into a path
- generate a path from an index
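
The merge rules listed above amount to greedily packing consecutive files into shards bounded by a block size. A hedged sketch of that rule, assumption-level only; this is not ADAM's ParallelFileMerger:

// Pack consecutive file sizes into shards of at most blockSize bytes;
// a file larger than blockSize still gets its own shard.
def packIntoShards(sizes: Seq[Long], blockSize: Long): Seq[Seq[Long]] = {
  require(blockSize > 0, "block size must be positive and non-zero when trying to merge files")
  require(sizes.nonEmpty, "must provide files to merge")
  sizes.tail.foldLeft(Vector(Vector(sizes.head))) { (shards, size) =>
    if (shards.last.sum + size <= blockSize)
      shards.init :+ (shards.last :+ size) // fits: extend the current shard
    else
      shards :+ Vector(size)               // start a new shard
  }
}
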
IndelTableSuite:
- check for indels in a region with known indels
- check for indels in a contig that doesn't exist
- check for indels in a region without known indels
- build indel table from rdd of variants
SnpTableSuite:
- create an empty snp table
- create a snp table from variants on multiple contigs
- create a snp table from a larger set of variants
- perform lookups on multi-contig snp table
- perform lookups on larger snp table
RealignIndelsSuite:
- map reads to targets
- checking mapping to targets for artificial reads
- checking alternative consensus for artificial reads
- checking extraction of reference from reads
- checking realigned reads for artificial input
- checking realigned reads for artificial input with reference file
- checking realigned reads for artificial input using knowns
- checking realigned reads for artificial input using knowns and reads
- skip realigning reads if target is highly covered
- skip realignment if target is an insufficient LOD improvement
- realign reads to an insertion
- test mismatch quality scoring
- test mismatch quality scoring for no mismatches
- test mismatch quality scoring for offset
- test mismatch quality scoring with early exit
- test mismatch quality scoring after unpacking read
- we shouldn't try to realign a region with no target
- we shouldn't try to realign reads with no indel evidence
- test OP and OC tags
- realign a read with an insertion that goes off the end of the read
- if realigning a target doesn't improve the LOD, don't drop reads
- extract seq/qual from a read with no clipped bases
- extract seq/qual from a read with clipped bases at start
- extract seq/qual from a read with clipped bases at end
- if unclip is selected, don't drop base when extracting from a read with clipped bases
- get cigar and coordinates for read that spans indel, no clipped bases
- get cigar and coordinates for read that spans deletion, clipped bases at start
- get cigar and coordinates for read that falls wholly before insertion
- get cigar and coordinates for read that falls wholly after insertion
- get cigar and coordinates for read that falls wholly after deletion
- get cigar and coordinates for read that partially spans insertion, no clipped bases
- get cigar and coordinates for read that partially spans insertion, clipped bases at end
- get cigar and coordinates for read that partially spans insertion, clipped bases both ends
BaseQualityRecalibrationSuite:
- BQSR Test Input #1 w/ VCF Sites without caching
- BQSR Test Input #1 w/ VCF Sites with caching
- BQSR Test Input #1 w/ VCF Sites with serialized caching
DinucCovariateSuite:
- computing dinucleotide pairs for a single base sequence should return (N,N)
- compute dinucleotide pairs for a string of all valid bases
- compute dinucleotide pairs for a string with an N
- compute covariates for a read on the negative strand
- compute covariates for a read on the positive strand
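
The dinucleotide behavior listed above can be read as: each base pairs with its predecessor, and any pair touching an invalid base (or the first base, which has no predecessor) degrades to (N,N). A minimal sketch under that reading — an assumption drawn from the test names, not ADAM's DinucCovariate:

def dinucPairs(sequence: String): Seq[(Char, Char)] = {
  val valid = Set('A', 'C', 'G', 'T')
  sequence.indices.map { i =>
    if (i > 0 && valid(sequence(i - 1)) && valid(sequence(i)))
      (sequence(i - 1), sequence(i))
    else
      ('N', 'N') // single-base sequences and N-adjacent pairs degrade to (N, N)
  }
}

// dinucPairs("A")    == Seq(('N','N'))
// dinucPairs("ACNG") == Seq(('N','N'), ('A','C'), ('N','N'), ('N','N'))
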
SequenceDictionarySuite:
- Convert from sam sequence record and back
- Convert from SAM sequence dictionary file (with extra fields)
- merge into existing dictionary
- Convert from SAM sequence dictionary and back
- Can retrieve sequence by name
- SequenceDictionaries with same single element are equal
- SequenceDictionaries with same two elements are equal
- SequenceDictionaries with different elements are unequal
- SequenceDictionaries with same elements in different order are compatible
- isCompatible tests equality on overlap
- The addition + works correctly
- The append operation ++ works correctly
- ContainsRefName works correctly for different string types
- Apply on name works correctly for different String types
- convert from sam sequence record and back
- convert from sam sequence dictionary and back
- conversion to sam sequence dictionary has correct sort order
- load sequence dictionary from VCF file
- empty sequence dictionary must be empty
- test filter to reference name
- test filter to reference names
- test filter to reference name by function
GenomicPositionPartitionerSuite:
- partitions the UNMAPPED ReferencePosition into the top partition
- if we do not have a contig for a record, we throw an IAE
- partitioning into N pieces on M total sequence length, where N > M, results in M partitions
- correctly partitions a single dummy sequence into two pieces
- correctly counts cumulative lengths
- correctly partitions positions across two dummy sequences
- test that we can range partition ADAMRecords
- test that we can range partition ADAMRecords indexed by sample
- test that simple partitioning works okay on a reasonable set of ADAMRecords
- test indexed ReferencePosition partitioning works on a set of indexed ADAMRecords
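
The partition-count behavior described above (N pieces over total sequence length M, with N > M collapsing to M partitions) is the arithmetic of an even range partitioning. A sketch of that arithmetic, illustrative only and not the GenomicPositionPartitioner implementation:

// Map a genome-wide cumulative position to a partition index.
def partitionOf(cumulativePos: Long, totalLength: Long, requestedPieces: Int): Int = {
  val pieces = math.min(requestedPieces.toLong, totalLength) // N > M yields M partitions
  val span = (totalLength + pieces - 1) / pieces             // ceil(M / pieces)
  (cumulativePos / span).toInt
}
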
CoverageSuite:
- Convert to coverage from valid Feature
- Convert to coverage from valid Feature with sampleId
- Convert to coverage from Feature with null/empty contigName fails with correct error
- Convert to coverage from Feature with no start/end position fails with correct error
- Convert to coverage from Feature with no score fails with correct error
InnerTreeRegionJoinSuite:
- Ensure same reference regions get passed together
- Overlapping reference regions
- Multiple reference regions do not throw exception
RichAlignmentRecordSuite:
- Unclipped Start
- Unclipped End
- tags contains optional fields
- read overlap unmapped read
- read overlap reference position
- read overlap same position different contig
VariantDatasetSuite:
- union two variant genomic datasets together
- round trip to parquet
- save and reload from partitioned parquet
- use broadcast join to pull down variants mapped to targets
- use right outer broadcast join to pull down variants mapped to targets
- use shuffle join to pull down variants mapped to targets
- use right outer shuffle join to pull down variants mapped to targets
- use left outer shuffle join to pull down variants mapped to targets
- use full outer shuffle join to pull down variants mapped to targets
- use shuffle join with group by to pull down variants mapped to targets
- use right outer shuffle join with group by to pull down variants mapped to targets
- convert back to variant contexts
- load parquet to sql, save, re-read from avro
2019-08-10 12:33:37 WARN  DatasetBoundSliceDataset:190 - Saving directly as Parquet from SQL. Options other than compression codec are ignored.
- transform variants to slice genomic dataset
- transform variants to coverage genomic dataset
- transform variants to feature genomic dataset
- transform variants to fragment genomic dataset
- transform variants to read genomic dataset
- transform variants to genotype genomic dataset
- transform variants to variant context genomic dataset
- filter RDD bound variants to filters passed
- filter dataset bound variants to filters passed
- filter RDD bound variants by quality
- filter dataset bound variants by quality
- filter RDD bound variants by read depth
- filter dataset bound variants by read depth
- filter RDD bound variants by reference read depth
- filter dataset bound variants by reference read depth
- filter RDD bound single nucleotide variants
- filter dataset bound single nucleotide variants
- filter RDD bound multiple nucleotide variants
- filter dataset bound multiple nucleotide variants
- filter RDD bound indel variants
- filter dataset bound indel variants
- filter RDD bound variants to single nucleotide variants
- filter dataset bound variants to single nucleotide variants
- filter RDD bound variants to multiple nucleotide variants
- filter dataset bound variants to multiple nucleotide variants
- filter RDD bound variants to indel variants
- filter dataset bound variants to indel variants
- transform dataset via java API
Run completed in 6 minutes, 32 seconds.
Total number of tests run: 1175
Suites: completed 68, aborted 0
Tests: succeeded 1174, failed 1, canceled 0, ignored 5, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] ADAM_2.12 .......................................... SUCCESS [ 11.667 s]
[INFO] ADAM_2.12: Shader workaround ....................... SUCCESS [  5.766 s]
[INFO] ADAM_2.12: Avro-to-Dataset codegen utils ........... SUCCESS [  6.633 s]
[INFO] ADAM_2.12: Core .................................... FAILURE [07:38 min]
[INFO] ADAM_2.12: APIs for Java, Python ................... SKIPPED
[INFO] ADAM_2.12: CLI ..................................... SKIPPED
[INFO] ADAM_2.12: Assembly ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:03 min
[INFO] Finished at: 2019-08-10T12:33:53-07:00
[INFO] Final Memory: 61M/1502M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:2.0.0:test (test) on project adam-core-spark2_2.12: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :adam-core-spark2_2.12
Build step 'Execute shell' marked build as failure
Recording test results
Publishing Scoverage XML and HTML report...
Setting commit status on GitHub for https://github.com/bigdatagenomics/adam/commit/70fa095272d8c033cfbc4f9d55466c46baa60179
Finished: FAILURE