Success
Console Output

Skipping 1,238 KB of log output; see the Full Log for the complete build output.
asetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/JavaADAMContext$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToAlignmentDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/MergeShardsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequences$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariants.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignments$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmers$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragments.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignmentsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragmentsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlicesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeatures$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmers.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeaturesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlices.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragments$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequences.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignments.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypes$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypes.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmersArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariants$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountReadKmers$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequencesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeatures.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountReadKmersArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlices$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariantsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/repo/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT-sources.jar longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/repo/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT-javadoc.jar longer than 100 characters.
[INFO] Building tar: /tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-distribution/target/adam-distribution-spark3_2.12-0.33.0-SNAPSHOT-bin.tar.bz2
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/smithwaterman/ longer than 100 characters.
[WARNING] Resulting tar file can only be processed successfully by GNU compatible tar commands
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/apache/parquet/avro/AvroSchemaConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/io/InterleavedFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/io/FastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/io/SingleFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/io/FastqRecordReader.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rich/RichAlignment$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/AttributeUtils$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/SequenceDictionaryReader$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/ReferenceContigMap$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/ParquetFileTraversable.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/ReferenceContigMap.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/ParquetLogger$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/GenomeFileReader$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/IndexedFastaFile.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/FileExtensions$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/TwoBitFileSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/ReferenceContigMapSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/util/TextRddWriter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/VariantCallingAnnotations.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/TranscriptEffect.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/TranscriptEffect$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/VariantAnnotation.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/ProcessingStep$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/VariantAnnotation$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/VariantCallingAnnotations$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/sql/VariantContext$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/serialization/WritableSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/serialization/InputStreamWithDecoder.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/serialization/index.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/serialization/ADAMKryoRegistrator.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/serialization/AvroSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/converters/AlignmentConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/converters/VariantContextConverter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/converters/VariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/converters/DefaultHeaderLines$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/ReferenceField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/VariantAnnotationField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/ReadGroupField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/TranscriptEffectField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/SliceField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/AlignmentField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/Projection$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/FeatureField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/VariantField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/SequenceField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/SampleField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/FragmentField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/Filter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/DbxrefField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/GenotypeField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/ReadField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/FieldEnumeration$SchemaVal.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/OntologyTermField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/VariantCallingAnnotationsField$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/FieldValue.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/projections/FieldEnumeration.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/DatasetBoundGenericGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/InnerTreeRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/SortedIntervalPartitionJoinWithVictims.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/NarrowPeakInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/BEDInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GFF3InFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/RDDBoundFeatureDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/RDDBoundCoverageDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GTFInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/BEDInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/FeatureDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/BEDOutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/CoverageDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/FeatureDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/ParquetUnboundFeatureDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GFF3InFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/DatasetBoundFeatureDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GTFOutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/CoverageDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/ParquetUnboundCoverageDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/NarrowPeakOutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/DatasetBoundCoverageDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GFF3OutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/GTFInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/feature/NarrowPeakInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/AlignmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/FASTQInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/FASTQInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMBAMOutputFormatHeaderLess.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMCRAMOutputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/BAMInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ReferencePositionPairSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/QualityScoreBin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/RDDBoundAlignmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ParquetUnboundReadDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/AnySAMInFormatterCompanion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/IncorrectMDTagException.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ReadDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/SAMInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ReadDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/AlignmentDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ParquetUnboundAlignmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/AnySAMOutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/SingleReadBucketSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/AnySAMInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/RDDBoundReadDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMCRAMOutputFormatHeaderLess.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMBAMOutputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMSAMOutputFormatHeaderLess.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/DatasetBoundReadDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/DatasetBoundAlignmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/SAMInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/QualityScoreBin$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/ADAMSAMOutputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/read/BAMInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/InnerShuffleRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/AvroGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/MultisampleGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/SliceDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/FASTAInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/DatasetBoundSliceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/RDDBoundSliceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/ParquetUnboundSequenceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/DatasetBoundSequenceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/FASTAInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/SequenceDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/SequenceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/ParquetUnboundSliceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/RDDBoundSequenceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/sequence/SliceDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/DatasetBoundGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/RightOuterTreeRegionJoinAndGroupByRight.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenericGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicRegionPartitioner.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/LeftOuterShuffleRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/ADAMSaveAnyArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/InnerTreeRegionJoinAndGroupByRight.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenericConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/ShuffleRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicBroadcast.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicDatasetWithLineage.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/RightOuterTreeRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/MultisampleAvroGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/LeftOuterShuffleRegionJoinAndGroupByLeft.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/ReferencePartitioner.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/InFormatterCompanion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicPositionPartitioner.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/InnerShuffleRegionJoinAndGroupByLeft.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/AvroReadGroupGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/VictimlessSortedIntervalPartitionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/DatasetBoundGenotypeDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/RDDBoundGenotypeDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/DatasetBoundVariantContextDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/DatasetBoundVariantDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/RDDBoundVariantContextDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VariantContextDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/ADAMVCFOutputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/ParquetUnboundGenotypeDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VariantDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/GenotypeDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VariantContextDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VCFOutFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/ParquetUnboundVariantDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VCFInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/GenotypeDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VariantDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/VCFInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/variant/RDDBoundVariantDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/Tab6InFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/ParquetUnboundFragmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/FragmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/FragmentDataset$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/Tab5InFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/InterleavedFASTQInFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/Tab5InFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/InterleavedFASTQInFormatter$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/RDDBoundFragmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/Tab6InFormatter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/fragment/DatasetBoundFragmentDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/FullOuterShuffleRegionJoin.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicPositionPartitioner$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/RightOuterShuffleRegionJoinAndGroupByLeft.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/GenomicRegionPartitioner$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/rdd/RDDBoundGenericGenomicDataset.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/OptionalPositionOrdering$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferencePosition.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/VariantContext.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferenceOrdering.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/SequenceDictionary$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferenceRegion$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/OptionalReferenceOrdering.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/SequenceRecord$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReadGroupDictionary$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReadGroupDictionary.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/PositionOrdering$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/RegionOrdering$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/TagType$$TypeVal.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/SequenceDictionary.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/SequenceRecord.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferenceRegion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferencePosition$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/OptionalRegionOrdering$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/VariantContextSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferencePositionSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/ReferenceRegionSerializer.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/models/VariantContext$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/smithwaterman/SmithWatermanConstantGapScoring.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/smithwaterman/index.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/smithwaterman/SmithWaterman.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/consensus/ConsensusGenerator.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/consensus/index.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/algorithms/consensus/ConsensusGenerator$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/apache/parquet/avro/AvroSchemaConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/apache/parquet/avro/class-use/AvroSchemaConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/InterleavedFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/class-use/InterleavedFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/class-use/FastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/class-use/SingleFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/class-use/FastqRecordReader.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/SingleFastqInputFormat.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/javadocs/org/bdgenomics/adam/io/FastqRecordReader.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/python/DataFrameConversionWrapper.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToVariantDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToFragmentDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToVariantContextDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToGenotypeDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToCoverageDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToSliceDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToFeatureConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToVariantContextsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToCoverageConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/JavaADAMContext.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToVariantContextsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToSequencesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToVariantContextConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToVariantContextsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToFeatureDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToSequenceDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToReadDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/CoverageToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToSlicesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToVariantsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToFeaturesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToReadsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToAlignmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToFeaturesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FeaturesToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/FragmentsToCoverageDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToGenotypesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/GenotypesToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSequencesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToVariantsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToAlignmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToFragmentsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/JavaADAMContext$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SlicesToFragmentsDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ToAlignmentDatasetConversion.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/ReadsToSlicesDatasetConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/AlignmentsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/VariantContextsToReadsConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/api/java/SequencesToGenotypesConverter.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/MergeShardsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequences$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariants.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignments$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmers$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragments.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignmentsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragmentsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlicesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeatures$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmers.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeaturesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlices.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFragments$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequences.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformAlignments.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypes$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformGenotypes.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountSliceKmersArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariants$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountReadKmers$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSequencesArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformFeatures.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/CountReadKmersArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformSlices$.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/scaladocs/org/bdgenomics/adam/cli/TransformVariantsArgs.html longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/repo/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT-sources.jar longer than 100 characters.
[WARNING] Entry: adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/repo/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT-javadoc.jar longer than 100 characters.
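These archiver warnings are informational: the distribution contains entry paths longer than the 100-character name field of the classic tar header, so the assembly plugin flags each one while packaging. Builds typically silence them by setting the assembly plugin's tarLongFileMode to gnu or posix. A quick way to list the affected entries after the fact (a sketch using the tarball path from this log, not part of the build itself):

# Sketch: list distribution entries whose paths exceed 100 characters,
# i.e. exactly the entries warned about above.
tar tzf adam-distribution/target/adam-distribution-spark3_2.12-0.33.0-SNAPSHOT-bin.tar.gz | awk 'length($0) > 100'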
[INFO] Building zip: /tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-distribution/target/adam-distribution-spark3_2.12-0.33.0-SNAPSHOT-bin.zip
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ADAM_2.12 0.33.0-SNAPSHOT:
[INFO] 
[INFO] ADAM_2.12 .......................................... SUCCESS [  8.230 s]
[INFO] ADAM_2.12: Shader workaround ....................... SUCCESS [  7.156 s]
[INFO] ADAM_2.12: Avro-to-Dataset codegen utils ........... SUCCESS [  9.121 s]
[INFO] ADAM_2.12: Core .................................... SUCCESS [01:30 min]
[INFO] ADAM_2.12: APIs for Java, Python ................... SUCCESS [ 22.154 s]
[INFO] ADAM_2.12: CLI ..................................... SUCCESS [ 26.722 s]
[INFO] ADAM_2.12: Assembly ................................ SUCCESS [ 14.418 s]
[INFO] ADAM_2.12: Python APIs ............................. SUCCESS [01:16 min]
[INFO] ADAM_2.12: Distribution ............................ SUCCESS [ 42.329 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  04:56 min
[INFO] Finished at: 2020-10-19T22:09:06-07:00
[INFO] ------------------------------------------------------------------------
+ tar tzvf adam-distribution/target/adam-distribution-spark3_2.12-0.33.0-SNAPSHOT-bin.tar.gz
+ grep bdgenomics.adam
+ grep egg
drwxrwxr-x jenkins/jenkins        0 2020-10-19 22:07 adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/r/bdgenomics.adam.egg-info/
-rw-r--r-- jenkins/jenkins 41129486 2020-10-19 22:07 adam-distribution-spark3_2.12-0.33.0-SNAPSHOT/repo/bdgenomics.adam-0.32.0a0-py3.6.egg
+ ./bin/pyadam
Using PYSPARK=/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/spark-3.0.1-bin-hadoop2.7/bin/pyspark
2020-10-19 22:09:08 WARN  Utils:69 - Your hostname, ubuntu-testing resolves to a loopback address: 127.0.1.1; using 192.168.10.30 instead (on interface eno1)
2020-10-19 22:09:08 WARN  Utils:69 - Set SPARK_LOCAL_IP if you need to bind to another address
2020-10-19 22:09:09 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2020-10-19 22:09:15 WARN  package:69 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.

[Stage 0:>                                                          (0 + 1) / 1]
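The pyadam invocation above is an interactive smoke test: the wrapper locates the bundled PySpark (see the Using PYSPARK= line), starts a shell with ADAM wired in, and the job simply lets it come up and exit. A non-interactive variant of the same check (a sketch, not part of this job; it assumes the wrapper passes stdin through to the PySpark shell, where sc is predefined):

# Sketch only: run one trivial statement through the pyadam shell and rely on its exit status.
echo 'print(sc.version)' | ./bin/pyadam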

                                                                                
+ source deactivate
#!/bin/sh
_CONDA_ROOT="/home/jenkins/anaconda2"
++ _CONDA_ROOT=/home/jenkins/anaconda2
\. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
++ . /home/jenkins/anaconda2/etc/profile.d/conda.sh
_CONDA_EXE="/home/jenkins/anaconda2/bin/conda"
+++ _CONDA_EXE=/home/jenkins/anaconda2/bin/conda
_CONDA_ROOT="/home/jenkins/anaconda2"
+++ _CONDA_ROOT=/home/jenkins/anaconda2
_conda_set_vars() {
    # set _CONDA_SHELL_FLAVOR
    if [ -n "${BASH_VERSION:+x}" ]; then
        _CONDA_SHELL_FLAVOR=bash
    elif [ -n "${ZSH_VERSION:+x}" ]; then
        _CONDA_SHELL_FLAVOR=zsh
    elif [ -n "${KSH_VERSION:+x}" ]; then
        _CONDA_SHELL_FLAVOR=ksh
    elif [ -n "${POSH_VERSION:+x}" ]; then
        _CONDA_SHELL_FLAVOR=posh
    else
        # default to dash; if we run into a problem here, please raise an issue
        _CONDA_SHELL_FLAVOR=dash
    fi

    if [ -z "${_CONDA_EXE+x}" ]; then
        if [ -n "${_CONDA_ROOT:+x}" ]; then
            # typically this should be for dev only; _CONDA_EXE should be written at top of file
            # for normal installs
            _CONDA_EXE="$_CONDA_ROOT/conda/shell/bin/conda"
        fi
        if ! [ -f "${_CONDA_EXE-x}" ]; then
            _CONDA_EXE="$PWD/conda/shell/bin/conda"
        fi
    fi

    # We're not allowing PS1 to be unbound. It must at least be set.
    # However, we're not exporting it, which can cause problems when starting a second shell
    # via a first shell (i.e. starting zsh from bash).
    if [ -z "${PS1+x}" ]; then
        PS1=
    fi

}


_conda_hashr() {
    case "$_CONDA_SHELL_FLAVOR" in
        zsh) \rehash;;
        posh) ;;
        *) \hash -r;;
    esac
}


_conda_activate() {
    if [ -n "${CONDA_PS1_BACKUP:+x}" ]; then
        # Handle transition from shell activated with conda <= 4.3 to a subsequent activation
        # after conda updated to >= 4.4. See issue #6173.
        PS1="$CONDA_PS1_BACKUP"
        \unset CONDA_PS1_BACKUP
    fi

    \local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix activate "$@")" || \return $?
    \eval "$ask_conda"

    _conda_hashr
}

_conda_deactivate() {
    \local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix deactivate "$@")" || \return $?
    \eval "$ask_conda"

    _conda_hashr
}

_conda_reactivate() {
    \local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix reactivate)" || \return $?
    \eval "$ask_conda"

    _conda_hashr
}


conda() {
    if [ "$#" -lt 1 ]; then
        $_CONDA_EXE
    else
        \local cmd="$1"
        shift
        case "$cmd" in
            activate)
                _conda_activate "$@"
                ;;
            deactivate)
                _conda_deactivate "$@"
                ;;
            install|update|uninstall|remove)
                $_CONDA_EXE "$cmd" "$@" && _conda_reactivate
                ;;
            *)
                $_CONDA_EXE "$cmd" "$@"
                ;;
        esac
    fi
}


_conda_set_vars
+++ _conda_set_vars
+++ '[' -n x ']'
+++ _CONDA_SHELL_FLAVOR=bash
+++ '[' -z x ']'
+++ '[' -z x ']'

if [ -z "${CONDA_SHLVL+x}" ]; then
    \export CONDA_SHLVL=0
fi
+++ '[' -z x ']'

_conda_deactivate
++ _conda_deactivate
++ local ask_conda
PS1="$PS1" $_CONDA_EXE shell.posix deactivate "$@"
+++ PS1='(adam-build-1f01ded5-ddc1-47c1-ac8b-c3b411aa17e4) '
+++ /home/jenkins/anaconda2/bin/conda shell.posix deactivate
++ ask_conda='\unset CONDA_DEFAULT_ENV
\unset CONDA_EXE
\unset CONDA_PREFIX
\unset CONDA_PROMPT_MODIFIER
\unset CONDA_PYTHON_EXE
PS1='\'''\''
\export CONDA_SHLVL='\''0'\''
\export PATH='\''/usr/lib/jvm/java-8-oracle/bin/:/usr/lib/jvm/java-8-oracle/bin/:/home/anaconda/bin/:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games'\'''
++ eval '\unset CONDA_DEFAULT_ENV
\unset CONDA_EXE
\unset CONDA_PREFIX
\unset CONDA_PROMPT_MODIFIER
\unset CONDA_PYTHON_EXE
PS1='\'''\''
\export CONDA_SHLVL='\''0'\''
\export PATH='\''/usr/lib/jvm/java-8-oracle/bin/:/usr/lib/jvm/java-8-oracle/bin/:/home/anaconda/bin/:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games'\'''
\unset CONDA_DEFAULT_ENV
+++ unset CONDA_DEFAULT_ENV
\unset CONDA_EXE
+++ unset CONDA_EXE
\unset CONDA_PREFIX
+++ unset CONDA_PREFIX
\unset CONDA_PROMPT_MODIFIER
+++ unset CONDA_PROMPT_MODIFIER
\unset CONDA_PYTHON_EXE
+++ unset CONDA_PYTHON_EXE
PS1=''
+++ PS1=
\export CONDA_SHLVL='0'
+++ export CONDA_SHLVL=0
+++ CONDA_SHLVL=0
\export PATH='/usr/lib/jvm/java-8-oracle/bin/:/usr/lib/jvm/java-8-oracle/bin/:/home/anaconda/bin/:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games'
+++ export PATH=/usr/lib/jvm/java-8-oracle/bin/:/usr/lib/jvm/java-8-oracle/bin/:/home/anaconda/bin/:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
+++ PATH=/usr/lib/jvm/java-8-oracle/bin/:/usr/lib/jvm/java-8-oracle/bin/:/home/anaconda/bin/:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
++ _conda_hashr
++ case "$_CONDA_SHELL_FLAVOR" in
++ hash -r
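Everything from '+ source deactivate' down to this point is Anaconda's conda.sh being replayed with shell tracing enabled: _conda_deactivate asks the conda binary for a POSIX snippet (conda shell.posix deactivate), evals it in the current shell so the CONDA_* variables, PATH and PS1 are rewritten in place, and then rehashes the command cache. A stand-alone sketch of that capture-and-eval pattern, with the conda path taken from this log:

# Sketch of the pattern _conda_deactivate uses above; not part of the job itself.
my_deactivate() {
    local snippet
    snippet="$(/home/jenkins/anaconda2/bin/conda shell.posix deactivate)" || return $?
    eval "$snippet"   # unsets CONDA_* variables and rewrites PATH and PS1
    hash -r           # drop cached command lookups now that PATH has changed
}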
+ conda remove -y -n adam-build-1f01ded5-ddc1-47c1-ac8b-c3b411aa17e4 --all
+ '[' 5 -lt 1 ']'
+ local cmd=remove
+ shift
+ case "$cmd" in
+ /home/jenkins/anaconda2/bin/conda remove -y -n adam-build-1f01ded5-ddc1-47c1-ac8b-c3b411aa17e4 --all

Remove all packages in environment /home/jenkins/anaconda2/envs/adam-build-1f01ded5-ddc1-47c1-ac8b-c3b411aa17e4:


## Package Plan ##

  environment location: /home/jenkins/anaconda2/envs/adam-build-1f01ded5-ddc1-47c1-ac8b-c3b411aa17e4


The following packages will be REMOVED:

    _libgcc_mutex:    0.1-main               
    ca-certificates:  2020.10.14-0           
    certifi:          2020.6.20-py36_0       
    ld_impl_linux-64: 2.33.1-h53a641e_7      
    libedit:          3.1.20191231-h14c3975_1
    libffi:           3.3-he6710b0_2         
    libgcc-ng:        9.1.0-hdf63c60_0       
    libstdcxx-ng:     9.1.0-hdf63c60_0       
    ncurses:          6.2-he6710b0_1         
    openssl:          1.1.1h-h7b6447c_0      
    pip:              20.2.3-py36_0          
    python:           3.6.12-hcff3b4d_2      
    readline:         8.0-h7b6447c_0         
    setuptools:       50.3.0-py36hb0f4dca_1  
    sqlite:           3.33.0-h62c20be_0      
    tk:               8.6.10-hbc83047_0      
    wheel:            0.35.1-py_0            
    xz:               5.2.5-h7b6447c_0       
    zlib:             1.2.11-h7b6447c_3      

+ _conda_reactivate
+ local ask_conda
PS1="$PS1" $_CONDA_EXE shell.posix reactivate
++ PS1=
++ /home/jenkins/anaconda2/bin/conda shell.posix reactivate
+ ask_conda=
+ eval ''
+ _conda_hashr
+ case "$_CONDA_SHELL_FLAVOR" in
+ hash -r
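With the shell deactivated, the job deletes the throwaway per-build conda environment outright. Condensed into a sketch, the whole lifecycle looks roughly like the following (the environment name is illustrative, the creation step happened earlier in the job and is not shown in this excerpt, and Python 3.6 matches the package list above):

conda create -y -n adam-build-example python=3.6      # done earlier in the job
conda activate adam-build-example                     # build and test inside the environment
conda deactivate                                      # the 'source deactivate' replayed above
conda remove -y -n adam-build-example --all           # the teardown step shown above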
+ cp -r adam-python/target /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu/scripts/../adam-python/
+ pushd adam-python
/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-python /tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded ~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu
+ make clean
pip uninstall -y adam
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
WARNING: Skipping adam as it is not installed.
rm -rf bdgenomics/*.egg*
rm -rf build/
rm -rf dist/
+ make clean_sdist
rm -rf dist
+ popd
/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded ~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu

if [ ${SPARK_VERSION} == 3.0.1 ]
then
    echo "Unable to build R support for Spark 3.0.1, SparkR is not available"
else
    # make a directory to install SparkR into, and set the R user libs path
    export R_LIBS_USER=${SPARK_HOME}/local_R_libs
    mkdir -p ${R_LIBS_USER}
    R CMD INSTALL \
      -l ${R_LIBS_USER} \
      ${SPARK_HOME}/R/lib/SparkR/

    export SPARKR_SUBMIT_ARGS="--jars ${ASSEMBLY_DIR}/${ASSEMBLY_JAR} --driver-class-path ${ASSEMBLY_DIR}/${ASSEMBLY_JAR} sparkr-shell"

    mvn -U \
    	-P r \
    	package \
    	-Dsuites=select.no.suites\* \
    	-Dhadoop.version=${HADOOP_VERSION}
fi
+ '[' 3.0.1 == 3.0.1 ']'
+ echo 'Unable to build R support for Spark 3.0.1, SparkR is not available'
Unable to build R support for Spark 3.0.1, SparkR is not available
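The guard above hard-codes the skip for Spark 3.0.1 because, per the script's own message, SparkR is not available for it. A more general way to probe whether a given Spark distribution actually ships SparkR before attempting the R CMD INSTALL path (a sketch; SPARK_HOME as used elsewhere in this log):

# Sketch: only attempt the R build if the Spark distribution bundles SparkR.
if [ -d "${SPARK_HOME}/R/lib/SparkR" ]; then
    echo "SparkR found under ${SPARK_HOME}/R/lib; the R profile can be built"
else
    echo "No SparkR under ${SPARK_HOME}/R/lib; skipping the R build"
fi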

# define filenames
BAM=mouse_chrM.bam
+ BAM=mouse_chrM.bam
READS=${BAM}.reads.adam
+ READS=mouse_chrM.bam.reads.adam
SORTED_READS=${BAM}.reads.sorted.adam
+ SORTED_READS=mouse_chrM.bam.reads.sorted.adam
FRAGMENTS=${BAM}.fragments.adam
+ FRAGMENTS=mouse_chrM.bam.fragments.adam
    
# fetch our input dataset
echo "Fetching BAM file"
+ echo 'Fetching BAM file'
Fetching BAM file
rm -rf ${BAM}
+ rm -rf mouse_chrM.bam
wget -q https://s3.amazonaws.com/bdgenomics-test/${BAM}
+ wget -q https://s3.amazonaws.com/bdgenomics-test/mouse_chrM.bam

# once fetched, convert BAM to ADAM
echo "Converting BAM to ADAM read format"
+ echo 'Converting BAM to ADAM read format'
Converting BAM to ADAM read format
rm -rf ${READS}
+ rm -rf mouse_chrM.bam.reads.adam
${ADAM} transformAlignments ${BAM} ${READS}
+ ./bin/adam-submit transformAlignments mouse_chrM.bam mouse_chrM.bam.reads.adam
Using ADAM_MAIN=org.bdgenomics.adam.cli.ADAMMain
Using spark-submit=/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/spark-3.0.1-bin-hadoop2.7/bin/spark-submit
20/10/19 22:09:31 WARN Utils: Your hostname, ubuntu-testing resolves to a loopback address: 127.0.1.1; using 192.168.10.30 instead (on interface eno1)
20/10/19 22:09:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/10/19 22:09:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.bdgenomics.adam.cli.ADAMMain).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/10/19 22:09:31 INFO SparkContext: Running Spark version 3.0.1
20/10/19 22:09:32 INFO ResourceUtils: ==============================================================
20/10/19 22:09:32 INFO ResourceUtils: Resources for spark.driver:

20/10/19 22:09:32 INFO ResourceUtils: ==============================================================
20/10/19 22:09:32 INFO SparkContext: Submitted application: transformAlignments
20/10/19 22:09:32 INFO SecurityManager: Changing view acls to: jenkins
20/10/19 22:09:32 INFO SecurityManager: Changing modify acls to: jenkins
20/10/19 22:09:32 INFO SecurityManager: Changing view acls groups to: 
20/10/19 22:09:32 INFO SecurityManager: Changing modify acls groups to: 
20/10/19 22:09:32 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
20/10/19 22:09:32 INFO Utils: Successfully started service 'sparkDriver' on port 38267.
20/10/19 22:09:32 INFO SparkEnv: Registering MapOutputTracker
20/10/19 22:09:32 INFO SparkEnv: Registering BlockManagerMaster
20/10/19 22:09:32 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/10/19 22:09:32 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/10/19 22:09:32 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/10/19 22:09:32 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-95c6a036-6090-48a2-9fca-49fd47a73474
20/10/19 22:09:32 INFO MemoryStore: MemoryStore started with capacity 408.9 MiB
20/10/19 22:09:32 INFO SparkEnv: Registering OutputCommitCoordinator
20/10/19 22:09:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/10/19 22:09:32 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.10.30:4040
20/10/19 22:09:32 INFO SparkContext: Added JAR file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-assembly/target/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar at spark://192.168.10.30:38267/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170572997
20/10/19 22:09:33 INFO Executor: Starting executor ID driver on host 192.168.10.30
20/10/19 22:09:33 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33729.
20/10/19 22:09:33 INFO NettyBlockTransferService: Server created on 192.168.10.30:33729
20/10/19 22:09:33 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/10/19 22:09:33 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.30, 33729, None)
20/10/19 22:09:33 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.30:33729 with 408.9 MiB RAM, BlockManagerId(driver, 192.168.10.30, 33729, None)
20/10/19 22:09:33 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.30, 33729, None)
20/10/19 22:09:33 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.30, 33729, None)
20/10/19 22:09:33 INFO ADAMContext: Loading mouse_chrM.bam as BAM/CRAM/SAM and converting to Alignments.
20/10/19 22:09:33 INFO ADAMContext: Loaded header from file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam
20/10/19 22:09:34 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 289.9 KiB, free 408.6 MiB)
20/10/19 22:09:34 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.6 KiB, free 408.6 MiB)
20/10/19 22:09:34 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.30:33729 (size: 23.6 KiB, free: 408.9 MiB)
20/10/19 22:09:34 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at ADAMContext.scala:2054
20/10/19 22:09:36 INFO RDDBoundAlignmentDataset: Saving data in ADAM format
20/10/19 22:09:36 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:36 INFO FileInputFormat: Total input paths to process : 1
20/10/19 22:09:36 INFO SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78
20/10/19 22:09:36 INFO DAGScheduler: Got job 0 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions
20/10/19 22:09:36 INFO DAGScheduler: Final stage: ResultStage 0 (runJob at SparkHadoopWriter.scala:78)
20/10/19 22:09:36 INFO DAGScheduler: Parents of final stage: List()
20/10/19 22:09:36 INFO DAGScheduler: Missing parents: List()
20/10/19 22:09:36 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at map at GenomicDataset.scala:3805), which has no missing parents
20/10/19 22:09:36 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 84.5 KiB, free 408.5 MiB)
20/10/19 22:09:36 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 31.3 KiB, free 408.5 MiB)
20/10/19 22:09:36 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.30:33729 (size: 31.3 KiB, free: 408.8 MiB)
20/10/19 22:09:36 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223
20/10/19 22:09:36 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at GenomicDataset.scala:3805) (first 15 tasks are for partitions Vector(0))
20/10/19 22:09:36 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/10/19 22:09:36 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.10.30, executor driver, partition 0, PROCESS_LOCAL, 7443 bytes)
20/10/19 22:09:36 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/10/19 22:09:36 INFO Executor: Fetching spark://192.168.10.30:38267/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170572997
20/10/19 22:09:36 INFO TransportClientFactory: Successfully created connection to /192.168.10.30:38267 after 45 ms (0 ms spent in bootstraps)
20/10/19 22:09:36 INFO Utils: Fetching spark://192.168.10.30:38267/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to /tmp/spark-53e81706-1d35-440b-9b38-5923cde1a695/userFiles-84841703-3b09-43d6-962b-b79cccb0cd20/fetchFileTemp9152164582828760307.tmp
20/10/19 22:09:36 INFO Executor: Adding file:/tmp/spark-53e81706-1d35-440b-9b38-5923cde1a695/userFiles-84841703-3b09-43d6-962b-b79cccb0cd20/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to class loader
20/10/19 22:09:36 INFO NewHadoopRDD: Input split: file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam:83361792-833134657535
20/10/19 22:09:36 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:36 INFO CodecConfig: Compression: GZIP
20/10/19 22:09:36 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:37 INFO ParquetOutputFormat: Parquet block size to 134217728
20/10/19 22:09:37 INFO ParquetOutputFormat: Parquet page size to 1048576
20/10/19 22:09:37 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
20/10/19 22:09:37 INFO ParquetOutputFormat: Dictionary is on
20/10/19 22:09:37 INFO ParquetOutputFormat: Validation is off
20/10/19 22:09:37 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
20/10/19 22:09:37 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
20/10/19 22:09:37 INFO ParquetOutputFormat: Page size checking is: estimated
20/10/19 22:09:37 INFO ParquetOutputFormat: Min row count for page size check is: 100
20/10/19 22:09:37 INFO ParquetOutputFormat: Max row count for page size check is: 10000
20/10/19 22:09:37 INFO CodecPool: Got brand-new compressor [.gz]
Ignoring SAM validation error: ERROR: Record 162622, Read name 613F0AAXX100423:3:58:9979:16082, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162624, Read name 613F0AAXX100423:6:13:3141:11793, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162625, Read name 613F0AAXX100423:8:39:18592:13552, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162635, Read name 613F1AAXX100423:7:2:13114:10698, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162637, Read name 613F1AAXX100423:6:100:8840:11167, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162639, Read name 613F1AAXX100423:8:15:10944:11181, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162640, Read name 613F1AAXX100423:8:17:5740:10104, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162651, Read name 613F1AAXX100423:1:53:11097:8261, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162654, Read name 613F1AAXX100423:2:112:16779:19612, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162657, Read name 613F0AAXX100423:8:28:7084:17683, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162659, Read name 613F0AAXX100423:8:39:19796:12794, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162662, Read name 613F1AAXX100423:5:116:9339:3264, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162667, Read name 613F0AAXX100423:4:67:2015:3054, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162669, Read name 613F0AAXX100423:7:7:11297:11738, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162674, Read name 613F0AAXX100423:6:59:10490:20829, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162678, Read name 613F1AAXX100423:8:11:17603:4766, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162682, Read name 613F0AAXX100423:5:86:10814:10257, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162683, Read name 613F0AAXX100423:5:117:14178:6111, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162685, Read name 613F0AAXX100423:2:3:13563:6720, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162689, Read name 613F0AAXX100423:7:59:16009:15799, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162696, Read name 613F0AAXX100423:5:31:9663:18252, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162698, Read name 613F1AAXX100423:2:27:12264:14626, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162699, Read name 613F0AAXX100423:1:120:19003:6647, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162702, Read name 613F1AAXX100423:3:37:6972:18407, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162704, Read name 613F1AAXX100423:3:77:6946:3880, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162706, Read name 613F0AAXX100423:7:48:2692:3492, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162708, Read name 613F1AAXX100423:7:80:8790:1648, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162710, Read name 6141AAAXX100423:5:30:15036:17610, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162712, Read name 613F1AAXX100423:8:80:6261:4465, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162713, Read name 6141AAAXX100423:5:74:5542:6195, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162715, Read name 613F1AAXX100423:5:14:14844:13639, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162718, Read name 613F1AAXX100423:7:112:14569:8480, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162725, Read name 613F1AAXX100423:4:56:10160:9879, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162727, Read name 6141AAAXX100423:7:89:12209:9221, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162731, Read name 6141AAAXX100423:6:55:1590:19793, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162732, Read name 6141AAAXX100423:7:102:16679:12368, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162734, Read name 613F1AAXX100423:2:7:4909:18472, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162737, Read name 6141AAAXX100423:4:73:6574:10572, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162741, Read name 6141AAAXX100423:1:8:14113:12655, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162743, Read name 6141AAAXX100423:3:40:7990:5056, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162744, Read name 6141AAAXX100423:4:36:15793:3411, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162745, Read name 6141AAAXX100423:8:83:1139:18985, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162746, Read name 6141AAAXX100423:5:7:18196:13562, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162748, Read name 6141AAAXX100423:3:114:5639:7123, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162751, Read name 6141AAAXX100423:7:47:4898:8640, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162753, Read name 6141AAAXX100423:3:64:8064:8165, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162756, Read name 613F1AAXX100423:1:105:14386:1684, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162757, Read name 613F1AAXX100423:6:98:1237:19470, MAPQ should be 0 for unmapped read.
Ignoring SAM validation error: ERROR: Record 162761, Read name 613F1AAXX100423:7:106:19658:9261, MAPQ should be 0 for unmapped read.
20/10/19 22:09:46 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 16043959
20/10/19 22:09:46 INFO FileOutputCommitter: Saved output of task 'attempt_20201019220936_0002_r_000000_0' to file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.reads.adam/_temporary/0/task_20201019220936_0002_r_000000
20/10/19 22:09:46 INFO SparkHadoopMapRedUtil: attempt_20201019220936_0002_r_000000_0: Committed
20/10/19 22:09:46 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 978 bytes result sent to driver
20/10/19 22:09:46 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 10202 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:09:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/10/19 22:09:46 INFO DAGScheduler: ResultStage 0 (runJob at SparkHadoopWriter.scala:78) finished in 10.328 s
20/10/19 22:09:46 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
20/10/19 22:09:46 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
20/10/19 22:09:46 INFO DAGScheduler: Job 0 finished: runJob at SparkHadoopWriter.scala:78, took 10.387611 s
20/10/19 22:09:46 INFO ParquetFileReader: Initiating action with parallelism: 5
20/10/19 22:09:46 INFO SparkHadoopWriter: Job job_20201019220936_0002 committed.
20/10/19 22:09:46 INFO SparkContext: Invoking stop() from shutdown hook
20/10/19 22:09:46 INFO SparkUI: Stopped Spark web UI at http://192.168.10.30:4040
20/10/19 22:09:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/10/19 22:09:46 INFO MemoryStore: MemoryStore cleared
20/10/19 22:09:46 INFO BlockManager: BlockManager stopped
20/10/19 22:09:46 INFO BlockManagerMaster: BlockManagerMaster stopped
20/10/19 22:09:46 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/10/19 22:09:46 INFO SparkContext: Successfully stopped SparkContext
20/10/19 22:09:46 INFO ShutdownHookManager: Shutdown hook called
20/10/19 22:09:46 INFO ShutdownHookManager: Deleting directory /tmp/spark-53e81706-1d35-440b-9b38-5923cde1a695
20/10/19 22:09:46 INFO ShutdownHookManager: Deleting directory /tmp/spark-813aa65c-a93f-40d3-a10e-b238f327740f
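At this point the first transformAlignments run has committed mouse_chrM.bam.reads.adam: an ADAM dataset on disk is a directory of gzip-compressed Parquet part files plus metadata, written via the Hadoop output committer (the _temporary path above). A couple of sanity checks one might run here (a sketch, not part of the job; flagstat is one of the ADAM CLI print commands, adjust if the command set differs in this build):

ls mouse_chrM.bam.reads.adam                           # part-r-*.gz.parquet plus metadata files
du -sh mouse_chrM.bam.reads.adam                       # on-disk size of the converted dataset
./bin/adam-submit flagstat mouse_chrM.bam.reads.adam   # samtools-style read statistics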

# then, sort the BAM
echo "Converting BAM to ADAM read format with sorting"
+ echo 'Converting BAM to ADAM read format with sorting'
Converting BAM to ADAM read format with sorting
rm -rf ${SORTED_READS}
+ rm -rf mouse_chrM.bam.reads.sorted.adam
${ADAM} transformAlignments -sort_by_reference_position ${READS} ${SORTED_READS}
+ ./bin/adam-submit transformAlignments -sort_by_reference_position mouse_chrM.bam.reads.adam mouse_chrM.bam.reads.sorted.adam
Using ADAM_MAIN=org.bdgenomics.adam.cli.ADAMMain
Using spark-submit=/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/spark-3.0.1-bin-hadoop2.7/bin/spark-submit
20/10/19 22:09:48 WARN Utils: Your hostname, ubuntu-testing resolves to a loopback address: 127.0.1.1; using 192.168.10.30 instead (on interface eno1)
20/10/19 22:09:48 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/10/19 22:09:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.bdgenomics.adam.cli.ADAMMain).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/10/19 22:09:49 INFO SparkContext: Running Spark version 3.0.1
20/10/19 22:09:49 INFO ResourceUtils: ==============================================================
20/10/19 22:09:49 INFO ResourceUtils: Resources for spark.driver:

20/10/19 22:09:49 INFO ResourceUtils: ==============================================================
20/10/19 22:09:49 INFO SparkContext: Submitted application: transformAlignments
20/10/19 22:09:49 INFO SecurityManager: Changing view acls to: jenkins
20/10/19 22:09:49 INFO SecurityManager: Changing modify acls to: jenkins
20/10/19 22:09:49 INFO SecurityManager: Changing view acls groups to: 
20/10/19 22:09:49 INFO SecurityManager: Changing modify acls groups to: 
20/10/19 22:09:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
20/10/19 22:09:49 INFO Utils: Successfully started service 'sparkDriver' on port 41493.
20/10/19 22:09:49 INFO SparkEnv: Registering MapOutputTracker
20/10/19 22:09:49 INFO SparkEnv: Registering BlockManagerMaster
20/10/19 22:09:49 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/10/19 22:09:49 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/10/19 22:09:49 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/10/19 22:09:49 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3f3269b4-aad4-4ffa-839e-e9ff19d80266
20/10/19 22:09:49 INFO MemoryStore: MemoryStore started with capacity 408.9 MiB
20/10/19 22:09:49 INFO SparkEnv: Registering OutputCommitCoordinator
20/10/19 22:09:49 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/10/19 22:09:49 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.10.30:4040
20/10/19 22:09:49 INFO SparkContext: Added JAR file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-assembly/target/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar at spark://192.168.10.30:41493/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170589857
20/10/19 22:09:50 INFO Executor: Starting executor ID driver on host 192.168.10.30
20/10/19 22:09:50 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36157.
20/10/19 22:09:50 INFO NettyBlockTransferService: Server created on 192.168.10.30:36157
20/10/19 22:09:50 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/10/19 22:09:50 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.30, 36157, None)
20/10/19 22:09:50 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.30:36157 with 408.9 MiB RAM, BlockManagerId(driver, 192.168.10.30, 36157, None)
20/10/19 22:09:50 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.30, 36157, None)
20/10/19 22:09:50 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.30, 36157, None)
20/10/19 22:09:50 INFO ADAMContext: Loading mouse_chrM.bam.reads.adam as Parquet of Alignments.
20/10/19 22:09:51 INFO ADAMContext: Reading the ADAM file at mouse_chrM.bam.reads.adam to create RDD
20/10/19 22:09:52 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 309.3 KiB, free 408.6 MiB)
20/10/19 22:09:52 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.9 KiB, free 408.6 MiB)
20/10/19 22:09:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.30:36157 (size: 27.9 KiB, free: 408.9 MiB)
20/10/19 22:09:52 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at ADAMContext.scala:1797
20/10/19 22:09:52 INFO TransformAlignments: Sorting alignments by reference position, with references ordered by name
20/10/19 22:09:52 INFO RDDBoundAlignmentDataset: Sorting alignments by reference position
20/10/19 22:09:52 INFO FileInputFormat: Total input paths to process : 1
20/10/19 22:09:52 INFO ParquetInputFormat: Total input paths to process : 1
20/10/19 22:09:52 INFO RDDBoundAlignmentDataset: Saving data in ADAM format
20/10/19 22:09:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:52 INFO SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78
20/10/19 22:09:52 INFO DAGScheduler: Registering RDD 2 (sortBy at AlignmentDataset.scala:1004) as input to shuffle 0
20/10/19 22:09:52 INFO DAGScheduler: Got job 0 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions
20/10/19 22:09:52 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at SparkHadoopWriter.scala:78)
20/10/19 22:09:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
20/10/19 22:09:52 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
20/10/19 22:09:52 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[2] at sortBy at AlignmentDataset.scala:1004), which has no missing parents
20/10/19 22:09:53 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 6.9 KiB, free 408.6 MiB)
20/10/19 22:09:53 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 408.6 MiB)
20/10/19 22:09:53 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.30:36157 (size: 4.0 KiB, free: 408.9 MiB)
20/10/19 22:09:53 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223
20/10/19 22:09:53 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[2] at sortBy at AlignmentDataset.scala:1004) (first 15 tasks are for partitions Vector(0))
20/10/19 22:09:53 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/10/19 22:09:53 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.10.30, executor driver, partition 0, PROCESS_LOCAL, 7482 bytes)
20/10/19 22:09:53 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/10/19 22:09:53 INFO Executor: Fetching spark://192.168.10.30:41493/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170589857
20/10/19 22:09:53 INFO TransportClientFactory: Successfully created connection to /192.168.10.30:41493 after 45 ms (0 ms spent in bootstraps)
20/10/19 22:09:53 INFO Utils: Fetching spark://192.168.10.30:41493/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to /tmp/spark-ab2ae25d-1669-4d6e-be15-af4dce3fd3a2/userFiles-6cd2c8a8-3979-4ee6-b29d-0d3e312e8630/fetchFileTemp3637945600473362896.tmp
20/10/19 22:09:53 INFO Executor: Adding file:/tmp/spark-ab2ae25d-1669-4d6e-be15-af4dce3fd3a2/userFiles-6cd2c8a8-3979-4ee6-b29d-0d3e312e8630/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to class loader
20/10/19 22:09:53 INFO NewHadoopRDD: Input split: file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.reads.adam/part-r-00000.gz.parquet:0+10132211
20/10/19 22:09:53 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 163064 records.
20/10/19 22:09:53 INFO InternalParquetRecordReader: at row 0. reading next block
20/10/19 22:09:53 INFO CodecPool: Got brand-new decompressor [.gz]
20/10/19 22:09:53 INFO InternalParquetRecordReader: block read in memory in 34 ms. row count = 163064
20/10/19 22:09:56 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1086 bytes result sent to driver
20/10/19 22:09:56 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3564 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:09:56 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/10/19 22:09:56 INFO DAGScheduler: ShuffleMapStage 0 (sortBy at AlignmentDataset.scala:1004) finished in 3.713 s
20/10/19 22:09:56 INFO DAGScheduler: looking for newly runnable stages
20/10/19 22:09:56 INFO DAGScheduler: running: Set()
20/10/19 22:09:56 INFO DAGScheduler: waiting: Set(ResultStage 1)
20/10/19 22:09:56 INFO DAGScheduler: failed: Set()
20/10/19 22:09:56 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at map at GenomicDataset.scala:3805), which has no missing parents
20/10/19 22:09:56 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 86.0 KiB, free 408.5 MiB)
20/10/19 22:09:56 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 32.2 KiB, free 408.4 MiB)
20/10/19 22:09:56 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.10.30:36157 (size: 32.2 KiB, free: 408.8 MiB)
20/10/19 22:09:56 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1223
20/10/19 22:09:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at map at GenomicDataset.scala:3805) (first 15 tasks are for partitions Vector(0))
20/10/19 22:09:56 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
20/10/19 22:09:56 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 192.168.10.30, executor driver, partition 0, NODE_LOCAL, 7143 bytes)
20/10/19 22:09:56 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
20/10/19 22:09:56 INFO ShuffleBlockFetcherIterator: Getting 1 (24.5 MiB) non-empty blocks including 1 (24.5 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
20/10/19 22:09:56 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 9 ms
20/10/19 22:09:59 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.10.30:36157 in memory (size: 4.0 KiB, free: 408.8 MiB)
20/10/19 22:09:59 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:59 INFO CodecConfig: Compression: GZIP
20/10/19 22:09:59 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:09:59 INFO ParquetOutputFormat: Parquet block size to 134217728
20/10/19 22:09:59 INFO ParquetOutputFormat: Parquet page size to 1048576
20/10/19 22:09:59 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
20/10/19 22:09:59 INFO ParquetOutputFormat: Dictionary is on
20/10/19 22:09:59 INFO ParquetOutputFormat: Validation is off
20/10/19 22:09:59 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
20/10/19 22:09:59 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
20/10/19 22:09:59 INFO ParquetOutputFormat: Page size checking is: estimated
20/10/19 22:09:59 INFO ParquetOutputFormat: Min row count for page size check is: 100
20/10/19 22:09:59 INFO ParquetOutputFormat: Max row count for page size check is: 10000
20/10/19 22:09:59 INFO CodecPool: Got brand-new compressor [.gz]
20/10/19 22:10:03 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 16004474
20/10/19 22:10:03 INFO FileOutputCommitter: Saved output of task 'attempt_20201019220952_0005_r_000000_0' to file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.reads.sorted.adam/_temporary/0/task_20201019220952_0005_r_000000
20/10/19 22:10:03 INFO SparkHadoopMapRedUtil: attempt_20201019220952_0005_r_000000_0: Committed
20/10/19 22:10:03 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1365 bytes result sent to driver
20/10/19 22:10:03 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 6797 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:10:03 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
20/10/19 22:10:03 INFO DAGScheduler: ResultStage 1 (runJob at SparkHadoopWriter.scala:78) finished in 6.827 s
20/10/19 22:10:03 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
20/10/19 22:10:03 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
20/10/19 22:10:03 INFO DAGScheduler: Job 0 finished: runJob at SparkHadoopWriter.scala:78, took 10.620054 s
20/10/19 22:10:03 INFO ParquetFileReader: Initiating action with parallelism: 5
20/10/19 22:10:03 INFO SparkHadoopWriter: Job job_20201019220952_0005 committed.
20/10/19 22:10:03 INFO SparkContext: Invoking stop() from shutdown hook
20/10/19 22:10:03 INFO SparkUI: Stopped Spark web UI at http://192.168.10.30:4040
20/10/19 22:10:03 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/10/19 22:10:03 INFO MemoryStore: MemoryStore cleared
20/10/19 22:10:03 INFO BlockManager: BlockManager stopped
20/10/19 22:10:03 INFO BlockManagerMaster: BlockManagerMaster stopped
20/10/19 22:10:03 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/10/19 22:10:03 INFO SparkContext: Successfully stopped SparkContext
20/10/19 22:10:03 INFO ShutdownHookManager: Shutdown hook called
20/10/19 22:10:03 INFO ShutdownHookManager: Deleting directory /tmp/spark-ab2ae25d-1669-4d6e-be15-af4dce3fd3a2
20/10/19 22:10:03 INFO ShutdownHookManager: Deleting directory /tmp/spark-d89c3aa9-0f6f-4d20-b6e2-f22e2faff13b
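For reference, the sort-and-save step logged above (mouse_chrM.bam.reads.adam -> mouse_chrM.bam.reads.sorted.adam) can also be driven from an adam-shell / spark-shell session. The following is only a minimal sketch: the import path and the sortByReferencePosition() method name are assumptions about the 0.33.x Scala API and may differ.

    // Hypothetical adam-shell sketch of the sort performed by transformAlignments above.
    // Assumes an active SparkContext `sc`; import path and sort method name are assumptions.
    import org.bdgenomics.adam.rdd.ADAMContext._

    val reads  = sc.loadAlignments("mouse_chrM.bam.reads.adam")    // Parquet of Alignments, as logged
    val sorted = reads.sortByReferencePosition()                   // assumed name of the reference-position sort
    sorted.saveAsParquet("mouse_chrM.bam.reads.sorted.adam")       // GZIP-compressed Parquet output, as logged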

# convert the reads to fragments to re-pair the reads
echo "Converting read file to fragments"
+ echo 'Converting read file to fragments'
Converting read file to fragments
rm -rf ${FRAGMENTS}
+ rm -rf mouse_chrM.bam.fragments.adam
${ADAM} transformFragments -load_as_alignments ${READS} ${FRAGMENTS}
+ ./bin/adam-submit transformFragments -load_as_alignments mouse_chrM.bam.reads.adam mouse_chrM.bam.fragments.adam
Using ADAM_MAIN=org.bdgenomics.adam.cli.ADAMMain
Using spark-submit=/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/spark-3.0.1-bin-hadoop2.7/bin/spark-submit
20/10/19 22:10:05 WARN Utils: Your hostname, ubuntu-testing resolves to a loopback address: 127.0.1.1; using 192.168.10.30 instead (on interface eno1)
20/10/19 22:10:05 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/10/19 22:10:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.bdgenomics.adam.cli.ADAMMain).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/10/19 22:10:06 INFO SparkContext: Running Spark version 3.0.1
20/10/19 22:10:06 INFO ResourceUtils: ==============================================================
20/10/19 22:10:06 INFO ResourceUtils: Resources for spark.driver:

20/10/19 22:10:06 INFO ResourceUtils: ==============================================================
20/10/19 22:10:06 INFO SparkContext: Submitted application: transformFragments
20/10/19 22:10:06 INFO SecurityManager: Changing view acls to: jenkins
20/10/19 22:10:06 INFO SecurityManager: Changing modify acls to: jenkins
20/10/19 22:10:06 INFO SecurityManager: Changing view acls groups to: 
20/10/19 22:10:06 INFO SecurityManager: Changing modify acls groups to: 
20/10/19 22:10:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
20/10/19 22:10:06 INFO Utils: Successfully started service 'sparkDriver' on port 46075.
20/10/19 22:10:06 INFO SparkEnv: Registering MapOutputTracker
20/10/19 22:10:06 INFO SparkEnv: Registering BlockManagerMaster
20/10/19 22:10:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/10/19 22:10:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/10/19 22:10:06 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/10/19 22:10:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3a066110-1f29-4cae-88da-8287f7496d2f
20/10/19 22:10:06 INFO MemoryStore: MemoryStore started with capacity 408.9 MiB
20/10/19 22:10:06 INFO SparkEnv: Registering OutputCommitCoordinator
20/10/19 22:10:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/10/19 22:10:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.10.30:4040
20/10/19 22:10:06 INFO SparkContext: Added JAR file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-assembly/target/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar at spark://192.168.10.30:46075/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170606982
20/10/19 22:10:07 INFO Executor: Starting executor ID driver on host 192.168.10.30
20/10/19 22:10:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35501.
20/10/19 22:10:07 INFO NettyBlockTransferService: Server created on 192.168.10.30:35501
20/10/19 22:10:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/10/19 22:10:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.30, 35501, None)
20/10/19 22:10:07 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.30:35501 with 408.9 MiB RAM, BlockManagerId(driver, 192.168.10.30, 35501, None)
20/10/19 22:10:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.30, 35501, None)
20/10/19 22:10:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.30, 35501, None)
20/10/19 22:10:07 INFO ADAMContext: Loading mouse_chrM.bam.reads.adam as Parquet of Alignments.
20/10/19 22:10:09 INFO ADAMContext: Reading the ADAM file at mouse_chrM.bam.reads.adam to create RDD
20/10/19 22:10:09 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 309.3 KiB, free 408.6 MiB)
20/10/19 22:10:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.9 KiB, free 408.6 MiB)
20/10/19 22:10:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.30:35501 (size: 27.9 KiB, free: 408.9 MiB)
20/10/19 22:10:09 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at ADAMContext.scala:1797
20/10/19 22:10:09 INFO FileInputFormat: Total input paths to process : 1
20/10/19 22:10:09 INFO ParquetInputFormat: Total input paths to process : 1
20/10/19 22:10:10 INFO RDDBoundFragmentDataset: Saving data in ADAM format
20/10/19 22:10:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:10:10 INFO SparkContext: Starting job: runJob at SparkHadoopWriter.scala:78
20/10/19 22:10:10 INFO DAGScheduler: Registering RDD 2 (groupBy at SingleReadBucket.scala:97) as input to shuffle 0
20/10/19 22:10:10 INFO DAGScheduler: Got job 0 (runJob at SparkHadoopWriter.scala:78) with 1 output partitions
20/10/19 22:10:10 INFO DAGScheduler: Final stage: ResultStage 1 (runJob at SparkHadoopWriter.scala:78)
20/10/19 22:10:10 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
20/10/19 22:10:10 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
20/10/19 22:10:10 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[2] at groupBy at SingleReadBucket.scala:97), which has no missing parents
20/10/19 22:10:10 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.4 KiB, free 408.6 MiB)
20/10/19 22:10:10 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 408.6 MiB)
20/10/19 22:10:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.30:35501 (size: 4.0 KiB, free: 408.9 MiB)
20/10/19 22:10:10 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223
20/10/19 22:10:10 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[2] at groupBy at SingleReadBucket.scala:97) (first 15 tasks are for partitions Vector(0))
20/10/19 22:10:10 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/10/19 22:10:10 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.10.30, executor driver, partition 0, PROCESS_LOCAL, 7482 bytes)
20/10/19 22:10:10 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/10/19 22:10:10 INFO Executor: Fetching spark://192.168.10.30:46075/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170606982
20/10/19 22:10:10 INFO TransportClientFactory: Successfully created connection to /192.168.10.30:46075 after 43 ms (0 ms spent in bootstraps)
20/10/19 22:10:10 INFO Utils: Fetching spark://192.168.10.30:46075/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to /tmp/spark-d9b3dc8a-4f73-4875-97e9-7eeee9f11434/userFiles-184ecf0d-7273-4590-bf46-8708bfb0f4ae/fetchFileTemp1115435501713764352.tmp
20/10/19 22:10:10 INFO Executor: Adding file:/tmp/spark-d9b3dc8a-4f73-4875-97e9-7eeee9f11434/userFiles-184ecf0d-7273-4590-bf46-8708bfb0f4ae/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to class loader
20/10/19 22:10:10 INFO NewHadoopRDD: Input split: file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.reads.adam/part-r-00000.gz.parquet:0+10132211
20/10/19 22:10:11 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 163064 records.
20/10/19 22:10:11 INFO InternalParquetRecordReader: at row 0. reading next block
20/10/19 22:10:11 INFO CodecPool: Got brand-new decompressor [.gz]
20/10/19 22:10:11 INFO InternalParquetRecordReader: block read in memory in 38 ms. row count = 163064
20/10/19 22:10:13 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1086 bytes result sent to driver
20/10/19 22:10:13 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 3483 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:10:13 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/10/19 22:10:13 INFO DAGScheduler: ShuffleMapStage 0 (groupBy at SingleReadBucket.scala:97) finished in 3.640 s
20/10/19 22:10:13 INFO DAGScheduler: looking for newly runnable stages
20/10/19 22:10:13 INFO DAGScheduler: running: Set()
20/10/19 22:10:13 INFO DAGScheduler: waiting: Set(ResultStage 1)
20/10/19 22:10:13 INFO DAGScheduler: failed: Set()
20/10/19 22:10:13 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[6] at map at GenomicDataset.scala:3805), which has no missing parents
20/10/19 22:10:13 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 90.1 KiB, free 408.5 MiB)
20/10/19 22:10:13 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 33.7 KiB, free 408.4 MiB)
20/10/19 22:10:13 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.10.30:35501 (size: 33.7 KiB, free: 408.8 MiB)
20/10/19 22:10:13 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1223
20/10/19 22:10:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[6] at map at GenomicDataset.scala:3805) (first 15 tasks are for partitions Vector(0))
20/10/19 22:10:13 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
20/10/19 22:10:13 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, 192.168.10.30, executor driver, partition 0, NODE_LOCAL, 7143 bytes)
20/10/19 22:10:13 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
20/10/19 22:10:13 INFO ShuffleBlockFetcherIterator: Getting 1 (26.9 MiB) non-empty blocks including 1 (26.9 MiB) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
20/10/19 22:10:13 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 10 ms
20/10/19 22:10:15 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:10:15 INFO CodecConfig: Compression: GZIP
20/10/19 22:10:15 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/10/19 22:10:15 INFO ParquetOutputFormat: Parquet block size to 134217728
20/10/19 22:10:15 INFO ParquetOutputFormat: Parquet page size to 1048576
20/10/19 22:10:15 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
20/10/19 22:10:15 INFO ParquetOutputFormat: Dictionary is on
20/10/19 22:10:15 INFO ParquetOutputFormat: Validation is off
20/10/19 22:10:15 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
20/10/19 22:10:15 INFO ParquetOutputFormat: Maximum row group padding size is 8388608 bytes
20/10/19 22:10:15 INFO ParquetOutputFormat: Page size checking is: estimated
20/10/19 22:10:15 INFO ParquetOutputFormat: Min row count for page size check is: 100
20/10/19 22:10:15 INFO ParquetOutputFormat: Max row count for page size check is: 10000
20/10/19 22:10:15 INFO CodecPool: Got brand-new compressor [.gz]
20/10/19 22:10:17 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.10.30:35501 in memory (size: 4.0 KiB, free: 408.8 MiB)
20/10/19 22:10:22 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 21417928
20/10/19 22:10:22 INFO FileOutputCommitter: Saved output of task 'attempt_20201019221010_0006_r_000000_0' to file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.fragments.adam/_temporary/0/task_20201019221010_0006_r_000000
20/10/19 22:10:22 INFO SparkHadoopMapRedUtil: attempt_20201019221010_0006_r_000000_0: Committed
20/10/19 22:10:22 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1365 bytes result sent to driver
20/10/19 22:10:22 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 9021 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:10:22 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
20/10/19 22:10:22 INFO DAGScheduler: ResultStage 1 (runJob at SparkHadoopWriter.scala:78) finished in 9.058 s
20/10/19 22:10:22 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
20/10/19 22:10:22 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
20/10/19 22:10:22 INFO DAGScheduler: Job 0 finished: runJob at SparkHadoopWriter.scala:78, took 12.789150 s
20/10/19 22:10:22 INFO ParquetFileReader: Initiating action with parallelism: 5
20/10/19 22:10:22 INFO SparkHadoopWriter: Job job_20201019221010_0006 committed.
20/10/19 22:10:23 INFO SparkContext: Invoking stop() from shutdown hook
20/10/19 22:10:23 INFO SparkUI: Stopped Spark web UI at http://192.168.10.30:4040
20/10/19 22:10:23 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/10/19 22:10:23 INFO MemoryStore: MemoryStore cleared
20/10/19 22:10:23 INFO BlockManager: BlockManager stopped
20/10/19 22:10:23 INFO BlockManagerMaster: BlockManagerMaster stopped
20/10/19 22:10:23 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/10/19 22:10:23 INFO SparkContext: Successfully stopped SparkContext
20/10/19 22:10:23 INFO ShutdownHookManager: Shutdown hook called
20/10/19 22:10:23 INFO ShutdownHookManager: Deleting directory /tmp/spark-d9b3dc8a-4f73-4875-97e9-7eeee9f11434
20/10/19 22:10:23 INFO ShutdownHookManager: Deleting directory /tmp/spark-ea241e82-2ec5-4119-be6b-9031558ffca2
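As with the sort step, the read-to-fragment conversion that transformFragments performs here has a programmatic counterpart. A minimal sketch, assuming a toFragments() conversion on the alignment dataset (the method name is an assumption for the 0.33.x API):

    // Hypothetical sketch of transformFragments -load_as_alignments: load reads, re-pair into fragments, save.
    import org.bdgenomics.adam.rdd.ADAMContext._

    val reads     = sc.loadAlignments("mouse_chrM.bam.reads.adam")
    val fragments = reads.toFragments()                            // assumed conversion that groups mates into fragments
    fragments.saveAsParquet("mouse_chrM.bam.fragments.adam")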

# test that printing works
echo "Printing reads and fragments"
+ echo 'Printing reads and fragments'
Printing reads and fragments
${ADAM} print ${READS} 1>/dev/null 2>/dev/null
+ ./bin/adam-submit print mouse_chrM.bam.reads.adam
${ADAM} print ${FRAGMENTS} 1>/dev/null 2>/dev/null
+ ./bin/adam-submit print mouse_chrM.bam.fragments.adam
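The print commands above only verify that records can be dumped (their output is redirected to /dev/null). A rough interactive equivalent, shown only as a hedged sketch, would be to load the dataset and print a handful of records:

    // Hypothetical sketch of inspecting a few records, analogous to `adam-submit print`.
    import org.bdgenomics.adam.rdd.ADAMContext._

    sc.loadAlignments("mouse_chrM.bam.reads.adam")
      .rdd                  // underlying RDD of Alignment records
      .take(5)
      .foreach(println)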

# run flagstat to verify that flagstat runs OK
echo "Printing read statistics"
+ echo 'Printing read statistics'
Printing read statistics
${ADAM} flagstat ${READS}
+ ./bin/adam-submit flagstat mouse_chrM.bam.reads.adam
Using ADAM_MAIN=org.bdgenomics.adam.cli.ADAMMain
Using spark-submit=/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/spark-3.0.1-bin-hadoop2.7/bin/spark-submit
20/10/19 22:10:40 WARN Utils: Your hostname, ubuntu-testing resolves to a loopback address: 127.0.1.1; using 192.168.10.30 instead (on interface eno1)
20/10/19 22:10:40 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/10/19 22:10:41 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
log4j:WARN No appenders could be found for logger (org.bdgenomics.adam.cli.ADAMMain).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/10/19 22:10:41 INFO SparkContext: Running Spark version 3.0.1
20/10/19 22:10:41 INFO ResourceUtils: ==============================================================
20/10/19 22:10:41 INFO ResourceUtils: Resources for spark.driver:

20/10/19 22:10:41 INFO ResourceUtils: ==============================================================
20/10/19 22:10:41 INFO SparkContext: Submitted application: flagstat
20/10/19 22:10:41 INFO SecurityManager: Changing view acls to: jenkins
20/10/19 22:10:41 INFO SecurityManager: Changing modify acls to: jenkins
20/10/19 22:10:41 INFO SecurityManager: Changing view acls groups to: 
20/10/19 22:10:41 INFO SecurityManager: Changing modify acls groups to: 
20/10/19 22:10:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
20/10/19 22:10:41 INFO Utils: Successfully started service 'sparkDriver' on port 46641.
20/10/19 22:10:42 INFO SparkEnv: Registering MapOutputTracker
20/10/19 22:10:42 INFO SparkEnv: Registering BlockManagerMaster
20/10/19 22:10:42 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/10/19 22:10:42 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/10/19 22:10:42 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/10/19 22:10:42 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-7c04959e-c2ca-4b89-9a21-c035cc91744a
20/10/19 22:10:42 INFO MemoryStore: MemoryStore started with capacity 408.9 MiB
20/10/19 22:10:42 INFO SparkEnv: Registering OutputCommitCoordinator
20/10/19 22:10:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/10/19 22:10:42 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.10.30:4040
20/10/19 22:10:42 INFO SparkContext: Added JAR file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/adam-assembly/target/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar at spark://192.168.10.30:46641/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170642476
20/10/19 22:10:42 INFO Executor: Starting executor ID driver on host 192.168.10.30
20/10/19 22:10:42 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 40879.
20/10/19 22:10:42 INFO NettyBlockTransferService: Server created on 192.168.10.30:40879
20/10/19 22:10:42 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/10/19 22:10:42 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.30, 40879, None)
20/10/19 22:10:42 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.30:40879 with 408.9 MiB RAM, BlockManagerId(driver, 192.168.10.30, 40879, None)
20/10/19 22:10:42 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.30, 40879, None)
20/10/19 22:10:42 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.30, 40879, None)
20/10/19 22:10:43 INFO ADAMContext: Loading mouse_chrM.bam.reads.adam as Parquet of Alignments.
20/10/19 22:10:43 INFO ADAMContext: Reading the ADAM file at mouse_chrM.bam.reads.adam to create RDD
20/10/19 22:10:43 INFO ADAMContext: Using the specified projection schema
20/10/19 22:10:43 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 320.5 KiB, free 408.6 MiB)
20/10/19 22:10:44 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.8 KiB, free 408.6 MiB)
20/10/19 22:10:44 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.30:40879 (size: 28.8 KiB, free: 408.9 MiB)
20/10/19 22:10:44 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at ADAMContext.scala:1797
20/10/19 22:10:45 INFO FileInputFormat: Total input paths to process : 1
20/10/19 22:10:45 INFO ParquetInputFormat: Total input paths to process : 1
20/10/19 22:10:45 INFO SparkContext: Starting job: aggregate at FlagStat.scala:115
20/10/19 22:10:45 INFO DAGScheduler: Got job 0 (aggregate at FlagStat.scala:115) with 1 output partitions
20/10/19 22:10:45 INFO DAGScheduler: Final stage: ResultStage 0 (aggregate at FlagStat.scala:115)
20/10/19 22:10:45 INFO DAGScheduler: Parents of final stage: List()
20/10/19 22:10:45 INFO DAGScheduler: Missing parents: List()
20/10/19 22:10:45 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at map at FlagStat.scala:96), which has no missing parents
20/10/19 22:10:45 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 5.3 KiB, free 408.6 MiB)
20/10/19 22:10:45 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.9 KiB, free 408.6 MiB)
20/10/19 22:10:45 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.30:40879 (size: 2.9 KiB, free: 408.9 MiB)
20/10/19 22:10:45 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223
20/10/19 22:10:45 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at FlagStat.scala:96) (first 15 tasks are for partitions Vector(0))
20/10/19 22:10:45 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/10/19 22:10:45 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.10.30, executor driver, partition 0, PROCESS_LOCAL, 7493 bytes)
20/10/19 22:10:45 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/10/19 22:10:45 INFO Executor: Fetching spark://192.168.10.30:46641/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar with timestamp 1603170642476
20/10/19 22:10:46 INFO TransportClientFactory: Successfully created connection to /192.168.10.30:46641 after 43 ms (0 ms spent in bootstraps)
20/10/19 22:10:46 INFO Utils: Fetching spark://192.168.10.30:46641/jars/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to /tmp/spark-99b92479-8748-4ab4-b1a5-d319e7247520/userFiles-867a19aa-ae17-4fd6-ab69-1d85ce8280c6/fetchFileTemp4281060034718570896.tmp
20/10/19 22:10:46 INFO Executor: Adding file:/tmp/spark-99b92479-8748-4ab4-b1a5-d319e7247520/userFiles-867a19aa-ae17-4fd6-ab69-1d85ce8280c6/adam-assembly-spark3_2.12-0.33.0-SNAPSHOT.jar to class loader
20/10/19 22:10:46 INFO NewHadoopRDD: Input split: file:/tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded/mouse_chrM.bam.reads.adam/part-r-00000.gz.parquet:0+10132211
20/10/19 22:10:46 INFO InternalParquetRecordReader: RecordReader initialized will read a total of 163064 records.
20/10/19 22:10:46 INFO InternalParquetRecordReader: at row 0. reading next block
20/10/19 22:10:46 INFO CodecPool: Got brand-new decompressor [.gz]
20/10/19 22:10:46 INFO InternalParquetRecordReader: block read in memory in 19 ms. row count = 163064
20/10/19 22:10:47 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 988 bytes result sent to driver
20/10/19 22:10:47 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1411 ms on 192.168.10.30 (executor driver) (1/1)
20/10/19 22:10:47 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/10/19 22:10:47 INFO DAGScheduler: ResultStage 0 (aggregate at FlagStat.scala:115) finished in 1.514 s
20/10/19 22:10:47 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
20/10/19 22:10:47 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
20/10/19 22:10:47 INFO DAGScheduler: Job 0 finished: aggregate at FlagStat.scala:115, took 1.575399 s
163064 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 primary duplicates
0 + 0 primary duplicates - both read and mate mapped
0 + 0 primary duplicates - only read mapped
0 + 0 primary duplicates - cross chromosome
0 + 0 secondary duplicates
0 + 0 secondary duplicates - both read and mate mapped
0 + 0 secondary duplicates - only read mapped
0 + 0 secondary duplicates - cross chromosome
160512 + 0 mapped (98.43%:0.00%)
163064 + 0 paired in sequencing
81524 + 0 read1
81540 + 0 read2
154982 + 0 properly paired (95.04%:0.00%)
158044 + 0 with itself and mate mapped
2468 + 0 singletons (1.51%:0.00%)
418 + 0 with mate mapped to a different chr
120 + 0 with mate mapped to a different chr (mapQ>=5)
20/10/19 22:10:47 INFO SparkContext: Invoking stop() from shutdown hook
20/10/19 22:10:47 INFO SparkUI: Stopped Spark web UI at http://192.168.10.30:4040
20/10/19 22:10:47 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/10/19 22:10:47 INFO MemoryStore: MemoryStore cleared
20/10/19 22:10:47 INFO BlockManager: BlockManager stopped
20/10/19 22:10:47 INFO BlockManagerMaster: BlockManagerMaster stopped
20/10/19 22:10:47 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/10/19 22:10:47 INFO SparkContext: Successfully stopped SparkContext
20/10/19 22:10:47 INFO ShutdownHookManager: Shutdown hook called
20/10/19 22:10:47 INFO ShutdownHookManager: Deleting directory /tmp/spark-99b92479-8748-4ab4-b1a5-d319e7247520
20/10/19 22:10:47 INFO ShutdownHookManager: Deleting directory /tmp/spark-85904f4b-94c6-4c67-bd90-ce661aca6dde
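The percentages in the flagstat report above are simple ratios over the 163,064 total reads; a quick arithmetic check of the logged values (plain Scala, not ADAM code):

    val total = 163064.0
    println(f"mapped: ${100.0 * 160512 / total}%.2f%%")           // 98.43%, matching the report
    println(f"properly paired: ${100.0 * 154982 / total}%.2f%%")  // 95.04%
    println(f"singletons: ${100.0 * 2468 / total}%.2f%%")         // 1.51%
    // read1 + read2 = 81524 + 81540 = 163064, the paired-in-sequencing total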
rm -rf ${ADAM_TMP_DIR}
+ rm -rf /tmp/adamTestbiNCJsI/deleteMePleaseThisIsNoLongerNeeded
popd
+ popd
~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu

pushd ${PROJECT_ROOT}
+ pushd /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu/scripts/..
~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu ~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu

# move back to Scala 2.12 as default
if [ ${SCALA_VERSION} == 2.11 ];
then
    set +e
    ./scripts/move_to_scala_2.12.sh
    set -e
fi
+ '[' 2.12 == 2.11 ']'
# move back to Spark 3.x as default
if [ ${SPARK_VERSION} == 2.4.7 ];
then
    set +e
    ./scripts/move_to_spark_3.sh
    set -e
fi
+ '[' 3.0.1 == 2.4.7 ']'

# test that the source is formatted correctly
./scripts/format-source
+ ./scripts/format-source
+++ dirname ./scripts/format-source
++ cd ./scripts
++ pwd
+ DIR=/home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu/scripts
+ pushd /home/jenkins/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu/scripts/..
~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu ~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu
+ mvn org.scalariform:scalariform-maven-plugin:format license:format
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1g; support was removed in 8.0
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] ADAM_2.12                                                          [pom]
[INFO] ADAM_2.12: Shader workaround                                       [jar]
[INFO] ADAM_2.12: Avro-to-Dataset codegen utils                           [jar]
[INFO] ADAM_2.12: Core                                                    [jar]
[INFO] ADAM_2.12: APIs for Java, Python                                   [jar]
[INFO] ADAM_2.12: CLI                                                     [jar]
[INFO] ADAM_2.12: Assembly                                                [jar]
[INFO] 
[INFO] ------------< org.bdgenomics.adam:adam-parent-spark3_2.12 >-------------
[INFO] Building ADAM_2.12 0.33.0-SNAPSHOT                                 [1/7]
[INFO] --------------------------------[ pom ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-parent-spark3_2.12 ---
[INFO] Modified 2 of 244 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-parent-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] -------------< org.bdgenomics.adam:adam-shade-spark3_2.12 >-------------
[INFO] Building ADAM_2.12: Shader workaround 0.33.0-SNAPSHOT              [2/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-shade-spark3_2.12 ---
[INFO] Modified 0 of 0 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-shade-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] ------------< org.bdgenomics.adam:adam-codegen-spark3_2.12 >------------
[INFO] Building ADAM_2.12: Avro-to-Dataset codegen utils 0.33.0-SNAPSHOT  [3/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-codegen-spark3_2.12 ---
[INFO] Modified 0 of 4 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-codegen-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] -------------< org.bdgenomics.adam:adam-core-spark3_2.12 >--------------
[INFO] Building ADAM_2.12: Core 0.33.0-SNAPSHOT                           [4/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-core-spark3_2.12 ---
[INFO] Modified 0 of 204 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-core-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] -------------< org.bdgenomics.adam:adam-apis-spark3_2.12 >--------------
[INFO] Building ADAM_2.12: APIs for Java, Python 0.33.0-SNAPSHOT          [5/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-apis-spark3_2.12 ---
[INFO] Modified 0 of 5 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-apis-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] --------------< org.bdgenomics.adam:adam-cli-spark3_2.12 >--------------
[INFO] Building ADAM_2.12: CLI 0.33.0-SNAPSHOT                            [6/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-cli-spark3_2.12 ---
[INFO] Modified 0 of 29 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-cli-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] 
[INFO] -----------< org.bdgenomics.adam:adam-assembly-spark3_2.12 >------------
[INFO] Building ADAM_2.12: Assembly 0.33.0-SNAPSHOT                       [7/7]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- scalariform-maven-plugin:0.1.4:format (default-cli) @ adam-assembly-spark3_2.12 ---
[INFO] Modified 0 of 1 .scala files
[INFO] 
[INFO] --- maven-license-plugin:1.10.b1:format (default-cli) @ adam-assembly-spark3_2.12 ---
[INFO] Updating license headers...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for ADAM_2.12 0.33.0-SNAPSHOT:
[INFO] 
[INFO] ADAM_2.12 .......................................... SUCCESS [  5.893 s]
[INFO] ADAM_2.12: Shader workaround ....................... SUCCESS [  0.026 s]
[INFO] ADAM_2.12: Avro-to-Dataset codegen utils ........... SUCCESS [  0.052 s]
[INFO] ADAM_2.12: Core .................................... SUCCESS [  3.407 s]
[INFO] ADAM_2.12: APIs for Java, Python ................... SUCCESS [  0.179 s]
[INFO] ADAM_2.12: CLI ..................................... SUCCESS [  0.211 s]
[INFO] ADAM_2.12: Assembly ................................ SUCCESS [  0.015 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  10.182 s
[INFO] Finished at: 2020-10-19T22:10:59-07:00
[INFO] ------------------------------------------------------------------------
+ popd
~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu
if test -n "$(git status --porcelain)"
then
    echo "Please run './scripts/format-source'"
    exit 1
fi
git status --porcelain
++ git status --porcelain
+ test -n ''
popd    
+ popd
~/workspace/ADAM-prb/HADOOP_VERSION/2.7.5/SCALA_VERSION/2.12/SPARK_VERSION/3.0.1/label/ubuntu

echo
+ echo

echo "All the tests passed"
+ echo 'All the tests passed'
All the tests passed
echo
+ echo

Recording test results
Publishing Scoverage XML and HTML report...
null
Setting commit status on GitHub for https://github.com/bigdatagenomics/adam/commit/217b5d815ab376cac98b664d0b1af8e7dc5b6df0
Finished: SUCCESS