Failed

Console Output

Skipping 127 KB..
- make a cigar and md tag from a single deletion
- make a cigar and md tag from a single insertion
- make a cigar for a match followed by a deletion
- make a cigar for an insertion flanked by matches
- make a cigar for a match followed by a mismatch
- make a cigar for a multi-base mismatch flanked by matches
- make a cigar for a match after a clip
- make a cigar for a mismatch after a clip
- extract reference from a single snp
- extract reference from a single deletion
- extract reference from a single insertion
- extract reference from a soft clipped sequence
- extract reference from a hard clipped sequence
- extract reference from a match flanked deletion
- extract reference from a match flanked insertion
- read must be mapped to extract alignment operators
- extracting alignment operators will fail if cigar is unset
- extracting alignment operators will fail if cigar is *
- extracting alignment operators will fail if MD tag is unset
- extract alignment operators from a perfect read
- extract alignment operators from a read with a single mismatch
- extract alignment operators from a read with a single deletion
- extract alignment operators from a read with a single insertion
LogUtilsSuite:
- test our nifty log summer
- can we compute the sum of logs correctly?
- can we compute the additive inverse of logs correctly?
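
The "log summer" exercised by LogUtilsSuite is presumably the standard log-sum-exp trick for adding probabilities held in log space. A minimal sketch, assuming natural logs; sumLogs and logInverse are illustrative names, not necessarily avocado's own API:

    import scala.math.{exp, log, log1p}

    // Sum probabilities stored as natural logs without underflow:
    // log(sum_i exp(l_i)) = m + log(sum_i exp(l_i - m)), where m = max l_i.
    def sumLogs(logs: Seq[Double]): Double = {
      val m = logs.max
      m + log(logs.map(l => exp(l - m)).sum)
    }

    // "Additive inverse" in log space: log(1 - p) for p given as log(p).
    def logInverse(logP: Double): Double =
      log1p(-exp(logP))
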
ObservationSuite:
- cannot create an observation with empty likelihoods
- cannot create an observation with 1-length likelihoods
- cannot create an observation with mismatching likelihood lengths
- forward strand must be >= 0
- forward strand cannot exceed coverage
- square map-q must be >= 0
- coverage is strictly positive
- invert an observation
- null an observation
RealignmentBlockSuite:
- folding over a clip returns the clip operator, soft clip
- folding over a clip returns the clip operator, hard clip
- folding over a canonical block returns the original alignment
- violate an invariant of the fold function, part 1
- violate an invariant of the fold function, part 2
- apply the fold function on a realignable block
- having a clip in the middle of a read is illegal
- can't have two soft clips back to back
- a read that is an exact sequence match is canonical
- hard clip before soft clip is ok at start of read
- hard clip after soft clip is ok at end of read
- a read with a single snp is canonical
- a read containing an indel with exact flanks is wholly realignable
- a read containing an indel with exact flanks is wholly realignable, with soft clipped bases
- a read containing an indel with longer flanks can be split into multiple blocks
- a read containing an indel with longer flanks on both sides can be split into multiple blocks
- properly handle a read that starts with a long soft clip
JointAnnotatorCallerSuite:
- discard reference site
- calculate MAF for all called genotypes
- calculate MAF ignoring uncalled genotypes
- roll up variant annotations from a single genotype
- roll up variant annotations across multiple genotypes
- recalling genotypes is a no-op for no calls and complex hets
- recall a genotype so that the state changes
- allele frequency being outside of (0.0, 1.0) just computes posteriors
- compute variant quality from a single genotype
- compute variant quality from multiple genotypes
CopyNumberMapSuite:
- create an empty map
- create a map with only diploid features
- create a map with a mix of features
PrefilterReadsSuite:
- filter on read uniqueness
- filter unmapped reads
- filter autosomal chromosomes with grc names
- filter sex chromosomes with grc names
- filter mitochondrial chromosome with grc names
- filter autosomal chromosomes with hg names
- filter sex chromosomes with hg names
- filter mitochondrial chromosome with hg names
- filter autosomal chromosomes from generator
- filter autosomal + sex chromosomes from generator
- filter all chromosomes from generator
- update a read whose mate is mapped to a filtered contig
- filter reads mapped to autosomal chromosomes from generator
- filter reads mapped to autosomal + sex chromosomes from generator
- filter reads mapped to all chromosomes from generator
- filter reads uniquely mapped to autosomal chromosomes from generator
- filter reads uniquely mapped to autosomal + sex chromosomes from generator
- filter reads uniquely mapped to all chromosomes from generator
- filter rdd of reads mapped to autosomal chromosomes from generator
- filter rdd of reads mapped to autosomal + sex chromosomes from generator
- filter rdd of reads mapped to all chromosomes from generator
- filter rdd of reads uniquely mapped to autosomal chromosomes from generator
- filter rdd of reads uniquely mapped to autosomal + sex chromosomes from generator
- filter rdd of reads uniquely mapped to all chromosomes from generator
Run completed in 28 minutes, 5 seconds.
Total number of tests run: 282
Suites: completed 23, aborted 0
Tests: succeeded 282, failed 0, canceled 0, ignored 1, pending 0
All tests passed.
[INFO] 
[INFO] >>> scoverage-maven-plugin:1.1.1:report (default-cli) > [scoverage]test @ avocado-core_2.11 >>>
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ avocado-core_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.10:add-source (add-source) @ avocado-core_2.11 ---
[INFO] Source directory: /home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/src/main/scala added.
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ avocado-core_2.11 ---
[INFO] Modified 0 of 49 .scala files
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:pre-compile (default-cli) @ avocado-core_2.11 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ avocado-core_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/src/main/resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:compile (scala-compile-first) @ avocado-core_2.11 ---
[WARNING]  Expected all dependencies to require Scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-misc-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-metrics-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-serialization-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-misc-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-io-spark2_2.11:0.2.13 requires scala version: 2.11.8
[WARNING] Multiple versions of scala libraries detected!
[INFO] /home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/src/main/scala:-1: info: compiling
[INFO] Compiling 26 source files to /home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/target/scala-2.11.4/scoverage-classes at 1568033079196
[INFO] [info] Cleaning datadir [/home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/target/scoverage-data]
[INFO] [info] Beginning coverage instrumentation
[WARNING] [warn] Could not instrument [EmptyTree$/null]. No pos.
[WARNING] [warn] Could not instrument [EmptyTree$/null]. No pos.
[WARNING] [warn] Could not instrument [EmptyTree$/null]. No pos.
[INFO] [info] Instrumentation completed [2625 statements]
[INFO] [info] Wrote instrumentation file [/home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/target/scoverage-data/scoverage.coverage.xml]
[INFO] [info] Will write measurement data to [/home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/target/scoverage-data]
[WARNING] warning: there were three deprecation warnings; re-run with -deprecation for details
[WARNING] one warning found
[INFO] prepare-compile in 0 s
[INFO] compile in 13 s
[INFO] 
[INFO] --- maven-compiler-plugin:3.5.1:compile (default-compile) @ avocado-core_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- scoverage-maven-plugin:1.1.1:post-compile (default-cli) @ avocado-core_2.11 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.10:add-test-source (add-test-source) @ avocado-core_2.11 ---
[INFO] Test Source directory: /home/jenkins/workspace/avocado/HADOOP_VERSION/2.3.0/SCALAVER/2.11/SPARK_VERSION/2.2.0/label/ubuntu/avocado-core/src/test/scala added.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ avocado-core_2.11 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 26 resources
[INFO] 
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ avocado-core_2.11 ---
[WARNING]  Expected all dependencies to require Scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-misc-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-metrics-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-serialization-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-misc-spark2_2.11:0.2.11 requires scala version: 2.11.4
[WARNING]  org.bdgenomics.utils:utils-io-spark2_2.11:0.2.13 requires scala version: 2.11.8
[WARNING] Multiple versions of scala libraries detected!
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-compiler-plugin:3.5.1:testCompile (default-testCompile) @ avocado-core_2.11 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.7:test (default-test) @ avocado-core_2.11 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ avocado-core_2.11 ---
Discovery starting.
Discovery completed in 880 milliseconds.
Run starting. Expected test count is: 282
LogPhredSuite:
- convert log error probabilities to phred scores
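
Converting a log error probability to a Phred score is a one-liner; a sketch assuming natural logs, with an illustrative function name:

    import scala.math.log

    // Phred: Q = -10 * log10(p) = -10 * ln(p) / ln(10).
    def logErrorToPhred(logP: Double): Double =
      -10.0 * logP / log(10.0)

    // Example: an error probability of 0.001 maps to Q30,
    // i.e. logErrorToPhred(log(0.001)) ~ 30.0.
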
TreeRegionJoinSuite:
- build a forest with a single item and retrieve data
- build a forest with data from a single contig and retrieve data
- build a forest with data from multiple contigs and retrieve data
2019-09-09 05:44:56 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-09-09 05:44:56 WARN  Utils:66 - Your hostname, amp-jenkins-staging-worker-02 resolves to a loopback address: 127.0.1.1; using 192.168.10.32 instead (on interface eno1)
2019-09-09 05:44:56 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
- build a forest out of data on a single contig and retrieve data
- run a join between data on a single contig
HardLimiterSuite:
- add a read to an empty buffer
- add a read to a non-empty buffer, without moving forward
- add a read to a non-empty buffer, and move forward
- trying to add a read to a full buffer, without moving forward, drops the read
- add a read to a full buffer, while moving forward and keeping buffer full
- add a read to a full buffer, while moving forward and emptying buffer
- adding an out of order read should fire an assert
- adding a read that is on the wrong contig should fire an assert
- apply hard limiting to an iterator that is wholly under the coverage limit
- apply hard limiting to an iterator that is partially under the coverage limit
- apply hard limiting to an iterator that is wholly over the coverage limit
- apply hard limiting on a file that is wholly under the coverage limit
- apply hard limiting on a file with sections over the coverage limit
VariantSummarySuite:
- create from genotype without strand bias components
- create from genotype with strand bias components
- invalid strand bias causes exception
- merge two fully populated summaries
- merge two partially populated summaries
- populating an annotation should carry old fields
RewriteHetsSuite:
- should rewrite a bad het snp
- should not rewrite het snp if snp filtering is disabled
- should rewrite a bad het indel
- should not rewrite het indel if indel filtering is disabled
- don't rewrite good het calls
- don't rewrite homozygous calls
- rewrite a het call as a hom alt snp
- processing a valid call should not change the call
- if processing is disabled, don't rewrite bad calls
- process a bad het snp call
- process a bad het indel call
- disable processing for a whole rdd
- process a whole rdd
RealignerSuite:
- realignment candidate code needs at least one block
- read is not a realignment candidate if it is canonical
- read is not a realignment candidate if it is canonical and clipped
- read is a realignment candidate if there is at least one non-canonical block
- realign an indel that is not left normalized
- realign a mnp expressed as a complex indel
- realign two snps expressed as a complex indel
- align sequence with a complex deletion
- realign a read with a complex deletion
- realign a read with a snp and deletion separated by a flank
- realigning a repetitive read will fire an assert
- realign a set of reads around an insert
- realign a set of reads around a deletion
2019-09-09 05:45:02 WARN  Realigner:101 - Realigning A_READ failed with exception java.lang.AssertionError: assertion failed: Input sequence contains a repeat..
- realigning a read with a repeat will return the original read
- one sample read should fail due to a repeat, all others should realign
HardFilterGenotypesSuite:
- filter out reference calls
- filter out low quality calls
- filter out genotypes for emission
- filter out genotypes with a low quality per depth
- filter out genotypes with a low depth
- filter out genotypes with a high depth
- filter out genotypes with a low RMS mapping quality
- filter out genotypes with a high strand bias
- update genotype where no filters were applied
- update genotype where filters were applied and passed
- update genotype where filters were applied and failed
- discard a ref genotype call
- keep a ref genotype call
- discard a genotype whose quality is too low
- build filters and apply to snp
- build filters and apply to indel
- test adding filters
- filter out genotypes with a low allelic fraction
- filter out genotypes with a high allelic fraction
TrioCallerSuite:
- cannot have a sample with no record groups
- cannot have a sample with discordant sample ids
- extract id from a single read group
- extract id from multiple read groups
- filter an empty site
- filter a site with only ref calls
- keep a site with a non-ref call
- fill in no-calls for site with missing parents
- pass through site with odd copy number
- confirm call at site where proband and parents are consistent and phase
- confirm call at site where proband and parents are consistent but cannot phase
- invalidate call at site where proband and parents are inconsistent
- end-to-end trio call test
BlockSuite:
- folding over a match block returns a match operator
- an unknown block must have mismatching input sequences
- folding over an unknown block returns our function's result
AlignerSuite:
- aligning a repetitive sequence will fire an assert
- align a minimally flanked sequence with a snp
- align a minimally flanked sequence with a 3 bp mnp
- align a minimally flanked sequence with 2 snps separated by 1bp
- align a minimally flanked sequence with 2 snps separated by 3bp
- align a minimally flanked sequence with a simple insert
- align a minimally flanked sequence with a complex insert
- align a minimally flanked sequence with a simple deletion
- align a minimally flanked sequence that contains a discordant k-mer pair
- align a minimally flanked sequence with a complex deletion
- align a minimally flanked sequence with 2 snps separated by two matching k-mers
- align a minimally flanked sequence with a snp and an indel separated by one matching k-mer
- zip and trim short insert
- zip and trim short deletion
- cut up a sequence that is longer than the k-mer length
- cutting up a sequence that is shorter than the k-mer length yields an empty map
- cutting up a repeated sequence throws an assert
- get no indices if we have no intersection
- get correct index for a single intersection
- get correct indices for two k-mers in a row
- get correct indices for two k-mers separated by a snp
- get correct indices for two k-mers separated by an indel
- get correct indices for two k-mers whose positions are flipped
- fire assert when cutting up repetitive reads
- fire assert when checking negative index pair
- a set of a single index pair is concordant
- a set with a pair of index pairs is concordant
- a set with multiple good index pairs is concordant
- a set with a pair of swapped index pairs is discordant
- a set with a pair of both con/discordant index pairs is discordant
- making blocks from no indices returns a single unknown block
- make blocks from a single match between two snps
- make blocks from three matches between two snps
- make blocks from three matches between two indels, opposite events
- make blocks from three matches between two indels, same events
- make blocks from matches between snp/indel/snp
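
Several AlignerSuite cases above (cutting sequences into k-mers, asserting on repeats, short sequences yielding an empty map) suggest a k-mer index of roughly this shape. A hedged sketch, all names illustrative:

    // Map each k-mer of `sequence` to its start index; repeated k-mers are
    // rejected, mirroring the "Input sequence contains a repeat." assertion
    // seen in this log.
    def cutToKmers(sequence: String, k: Int): Map[String, Int] = {
      val kmers = sequence.sliding(k).filter(_.length == k).toSeq
      assert(kmers.distinct.size == kmers.size,
        "Input sequence contains a repeat.")
      kmers.zipWithIndex.toMap
    }

    // A sequence shorter than k yields an empty map:
    // cutToKmers("ACG", 5) == Map()
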
BiallelicGenotyperSuite:
- properly handle haploid genotype state
- properly handle diploid genotype state with het call
- properly handle triploid genotype state with hom alt call
- scoring read that overlaps no variants should return empty observations in variant only mode
- scoring read that overlaps no variants should return empty observations
2019-09-09 05:45:09 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (4472 KB). The maximum recommended task size is 100 KB.
2019-09-09 05:45:09 WARN  Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
- score snps in a read overlapping a copy number dup boundary
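
The "Truncated the string representation of a plan" warning above is harmless but can be silenced by raising spark.debug.maxToStringFields. A hedged sketch; in Spark 2.2 the key is read from the active configuration, so it must be set before the SparkContext is created, and the value 1000 is arbitrary rather than a recommendation:

    import org.apache.spark.{SparkConf, SparkContext}

    // Set the limit on the config used to build the context.
    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("avocado-tests")
      .set("spark.debug.maxToStringFields", "1000")
    val sc = new SparkContext(conf)
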
2019-09-09 05:45:29 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (4060 KB). The maximum recommended task size is 100 KB.
- score snps in a read overlapping a copy number del boundary
- score snp in a read with no evidence of the snp
- score snp in a read with evidence of the snp
- score snp in a read with evidence of the snp, and non-variant bases
- build genotype for het snp
- force call possible STR/indel !!! IGNORED !!!
- log space factorial
- fisher test for strand bias
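
The two tests above ("log space factorial" and "fisher test for strand bias") hint at the usual construction: a log-factorial feeding the log hypergeometric probability of a 2x2 strand-count table. A minimal sketch with illustrative names:

    // log(n!) computed in log space, so large n cannot overflow a Double.
    def logFactorial(n: Int): Double =
      (2 to n).map(i => math.log(i.toDouble)).sum

    // Log hypergeometric probability of the 2x2 table [[a, b], [c, d]],
    // the per-table term in Fisher's exact test for strand bias:
    // P = (a+b)!(c+d)!(a+c)!(b+d)! / (a! b! c! d! n!).
    def logHypergeometric(a: Int, b: Int, c: Int, d: Int): Double =
      (logFactorial(a + b) + logFactorial(c + d) +
        logFactorial(a + c) + logFactorial(b + d)) -
        (logFactorial(a) + logFactorial(b) +
          logFactorial(c) + logFactorial(d) +
          logFactorial(a + b + c + d))
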
2019-09-09 05:45:47 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:45:54 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- discover and call simple SNP
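
The recurring "Input RDD is not persisted" warning from BiallelicGenotyper means the input reads are recomputed across stages; persisting them once before calling avoids that. A hedged sketch against a plain Spark RDD, since avocado's actual read type is not visible in this log:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.storage.StorageLevel

    // Cache the input once so downstream genotyping stages reuse it
    // instead of recomputing the full lineage.
    def persisted[T](reads: RDD[T]): RDD[T] =
      reads.persist(StorageLevel.MEMORY_AND_DISK)
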
2019-09-09 05:47:12 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:47:20 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- discover and call simple SNP and score all sites
2019-09-09 05:48:45 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:48:54 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- discover and call short indel
2019-09-09 05:50:18 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:50:27 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
2019-09-09 05:50:41 ERROR BiallelicGenotyper:387 - Processing read H06JUADXX130110:1:1109:10925:52628 failed with exception java.lang.StringIndexOutOfBoundsException: String index out of range: 0. Skipping...
2019-09-09 05:50:41 ERROR BiallelicGenotyper:387 - Processing read H06JUADXX130110:1:1116:7369:15293 failed with exception java.lang.StringIndexOutOfBoundsException: String index out of range: 0. Skipping...
2019-09-09 05:50:41 ERROR BiallelicGenotyper:387 - Processing read H06HDADXX130110:2:1115:12347:40533 failed with exception java.lang.StringIndexOutOfBoundsException: String index out of range: 0. Skipping...
2019-09-09 05:50:41 ERROR BiallelicGenotyper:387 - Processing read H06HDADXX130110:1:2110:7844:95190 failed with exception java.lang.StringIndexOutOfBoundsException: String index out of range: 0. Skipping...
2019-09-09 05:50:41 ERROR BiallelicGenotyper:387 - Processing read H06HDADXX130110:1:2203:13041:33390 failed with exception java.lang.StringIndexOutOfBoundsException: String index out of range: 0. Skipping...
- discover and call het and hom snps
- score a single read covering a deletion
2019-09-09 05:51:59 WARN  TaskSetManager:66 - Stage 7 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- discover and force call hom alt deletion
2019-09-09 05:53:28 WARN  TaskSetManager:66 - Stage 7 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt AGCCAGTGGACGCCGACCT->A deletion at 1/875159
2019-09-09 05:54:49 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:54:57 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt TACACACACACACACACACACACACACACAC->T deletion at 1/1777263
2019-09-09 05:56:19 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:56:28 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt CAG->C deletion at 1/1067596
2019-09-09 05:57:55 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:58:03 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt C->G snp at 1/877715
2019-09-09 05:59:28 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 05:59:37 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt ACAG->A deletion at 1/886049
2019-09-09 06:01:02 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 06:01:10 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt GA->CC mnp at 1/889158-9
2019-09-09 06:02:33 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 06:02:41 WARN  TaskSetManager:66 - Stage 14 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call hom alt C->CCCCT insertion at 1/866511
2019-09-09 06:02:48 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 06:02:56 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
- call het ATG->A deletion at 1/905130
2019-09-09 06:04:18 WARN  BiallelicGenotyper:170 - Input RDD is not persisted. Performance may be degraded.
2019-09-09 06:04:25 WARN  TaskSetManager:66 - Stage 5 contains a task of very large size (10813 KB). The maximum recommended task size is 100 KB.
2019-09-09 06:05:03 ERROR Executor:91 - Exception in task 18.0 in stage 7.0 (TID 410)
java.lang.IllegalStateException: unread block data
	at java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2783)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1605)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-09-09 06:05:03 ERROR Executor:91 - Exception in task 19.0 in stage 7.0 (TID 411)
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.lang.String.substring(String.java:1969)
	at sun.reflect.misc.ReflectUtil.isNonPublicProxyClass(ReflectUtil.java:288)
	at sun.reflect.misc.ReflectUtil.checkPackageAccess(ReflectUtil.java:165)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1871)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-09-09 06:05:03 ERROR SparkUncaughtExceptionHandler:91 - Uncaught exception in thread Thread[Executor task launch worker for task 411,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.lang.String.substring(String.java:1969)
	at sun.reflect.misc.ReflectUtil.isNonPublicProxyClass(ReflectUtil.java:288)
	at sun.reflect.misc.ReflectUtil.checkPackageAccess(ReflectUtil.java:165)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1871)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-09-09 06:05:03 ERROR LiveListenerBus:70 - SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@a5c6fe1)
2019-09-09 06:05:03 ERROR LiveListenerBus:70 - SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(3,1568034303137,JobFailed(org.apache.spark.SparkException: Job 3 cancelled because SparkContext was shut down))
2019-09-09 06:05:03 WARN  TaskSetManager:66 - Lost task 19.0 in stage 7.0 (TID 411, localhost, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.lang.String.substring(String.java:1969)
	at sun.reflect.misc.ReflectUtil.isNonPublicProxyClass(ReflectUtil.java:288)
	at sun.reflect.misc.ReflectUtil.checkPackageAccess(ReflectUtil.java:165)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1871)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

2019-09-09 06:05:03 ERROR TaskSetManager:70 - Task 19 in stage 7.0 failed 1 times; aborting job
2019-09-09 06:05:03 WARN  NettyRpcEnv:66 - RpcEnv already stopped.
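
The fatal OutOfMemoryError above fires inside Spark's Java deserialization of a >10 MB task (note the JavaDeserializationStream frames), so the usual mitigations are a larger test-JVM heap and a cheaper serializer. A hedged sketch of the serializer side; the Kryo registrator class below is ADAM's and is an assumption here:

    import org.apache.spark.SparkConf

    // Kryo typically yields much smaller task payloads than Java serialization.
    val conf = new SparkConf()
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrator",
        "org.bdgenomics.adam.serialization.ADAMKryoRegistrator")
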
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping avocado: A Variant Caller, Distributed
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] avocado: A Variant Caller, Distributed ............. SUCCESS [  4.380 s]
[INFO] avocado-core: Core variant calling algorithms ...... FAILURE [49:06 min]
[INFO] avocado-cli: Command line interface for a distributed variant caller SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 49:10 min
[INFO] Finished at: 2019-09-09T06:05:04-07:00
[INFO] Final Memory: 33M/1068M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project avocado-core_2.11: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :avocado-core_2.11
Build step 'Execute shell' marked build as failure
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Sending e-mails to: fnothaft@berkeley.edu
Finished: FAILURE