Genomics England Research
Background
One of the goals of The 100,000 Genomes Project from Genomics England is to enable new medical research. Researchers will study how best to use genomics in healthcare and how best to interpret the data to help patients. The causes, diagnosis and treatment of disease will also be investigated. This is currently the largest national sequencing project of its kind in the world.
To achieve this goal Genomics England set up a Research environment for researchers and clinicians. OpenCGA, CellBase and IVA from OpenCB were installed as the data platform. We loaded 64,078 whole genomes into OpenCGA: in total about 1 billion unique variants were loaded and indexed in OpenCGA Variant Storage, and all the metadata and clinical data for samples and patients were loaded in OpenCGA Catalog. OpenCGA was able to load and index about 6,000 samples a day, and executing the variant annotation and computing the different cohort stats for all the data took less than a week. In summary, all data was loaded, indexed, annotated and its stats calculated in less than 2 weeks. Genomic variants were annotated using CellBase, and the IVA front-end was installed for researchers and clinicians to analyse and visualise the data. In this document you can find a full report about the loading and analysis of the 64,078 genomes.
Genomic and Clinical Data
Clinical data and genomic variants of 64,078 genomes were loaded and indexed in OpenCGA. In total we loaded more than 30,000 VCF files, accounting for about 40TB of compressed disk space. Data was organised into four different datasets depending on the genome assembly (GRCh37 or GRCh38) and the type of study (germline or somatic), which mapped in OpenCGA to three different Projects and four Studies:
OpenCGA Catalog stores all the metadata and clinical data of files, samples, individuals and cohorts. Rare Disease studies also include pedigree metadata by defining families, and a Clinical Analysis was defined for each family. Several Variable Sets have been defined to store GEL custom data for all these entities.
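As an illustration of how this metadata can be accessed programmatically, here is a minimal sketch using the pyopencga Python client; the host, user and study names are illustrative, and the client shown is the current one rather than the exact version deployed in the Research environment.

```python
# Minimal sketch, assuming the pyopencga client; host, user and study names
# below are illustrative, not the real GEL ones.
from pyopencga.opencga_client import OpencgaClient
from pyopencga.opencga_config import ClientConfiguration

# Point the client at an OpenCGA REST endpoint (hypothetical host)
config = ClientConfiguration({'rest': {'host': 'https://opencga.example.org/opencga'}})
oc = OpencgaClient(config)
oc.login(user='researcher')   # recent pyopencga versions prompt for the password

# Browse a few samples of a study and print their custom annotations
# (the Variable Set data mentioned above is exposed as 'annotationSets')
for sample in oc.samples.search(study='RD38', limit=3).get_results():
    print(sample['id'], sample.get('annotationSets', []))
```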
Platform
For the Research environment we used OpenCGA v1.4 with the new Hadoop Variant Storage, which uses Apache HBase as the back-end, because of the huge amount of data and analysis needed. We also used CellBase v4.6 for the variant annotation. Finally, we set up IVA v1.0 as the web-based variant analysis tool.
The Hadoop cluster consists of about 30 nodes running Hortonworks HDP 2.6.5 (which comes with HBase 1.1.2) and an LSF queue for loading all the VCF files; see this table for more detail:
Genomic Data Load
In order to improve the loading performance, we set up a small LSF queue of ten computing nodes. This configuration allowed us to load multiple files at the same time. We configured LSF to load up to 6 VCF files per node, resulting in 60 files being loaded into HBase in parallel without any incident; by doing this we observed a 50x increase in loading throughput. This resulted in an average of 125 VCF files loaded per hour in studies RD37 and RD38, which is about 2 files per minute. In study CG38 the performance was 240 VCF files per hour, or about 4 files per minute.
Rare Disease Loading Performance
The files from the Rare Disease studies (RD38 & RD37) contain 2 samples per file on average. This results in larger files, increasing the loading time compared with single-sample files. As mentioned above, the loading performance was about 125 files per hour, or 3,000 files per day. In terms of number of samples this is about 250 samples per hour, or 6,000 samples a day.
Although the loading performance always depends on the number of variants and on the number of concurrent files being loaded, the performance was quite stable during the load and no performance degradation was observed, as can be seen here:
Saturation Study
As part of the data loading process we decided to study the number of unique variants added in each batch of 500 samples. We generated this saturation plot for RD38:
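For illustration, here is a minimal, storage-agnostic sketch of how such a saturation curve can be derived directly from the input VCFs; the grouping of files into batches of roughly 500 samples is assumed to be known, and the paths are illustrative.

```python
# Minimal sketch: count how many new unique variants each batch of samples
# adds. File paths and the batch grouping below are illustrative.
import gzip

def variant_keys(vcf_path):
    """Yield (chrom, pos, ref, alt) keys from a (possibly gzipped) VCF."""
    opener = gzip.open if vcf_path.endswith('.gz') else open
    with opener(vcf_path, 'rt') as fh:
        for line in fh:
            if line.startswith('#'):
                continue
            chrom, pos, _id, ref, alts = line.split('\t', 5)[:5]
            for alt in alts.split(','):
                yield chrom, pos, ref, alt

def saturation(batches):
    """batches: list of lists of VCF paths (e.g. ~500 samples per batch).
    Returns the cumulative number of unique variants after each batch."""
    seen, curve = set(), []
    for batch in batches:
        for path in batch:
            seen.update(variant_keys(path))
        curve.append(len(seen))
    return curve
```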
Cancer Loading Performance
The files from the Cancer Germline study (CG38) contain one sample per file. Compared with the Rare Disease files, these are smaller in size; therefore, as expected, the file load was almost 2x faster. As mentioned above, the loading performance was about 240 genomes per hour, or 5,800 files per day. In terms of number of samples this is about 5,800 samples a day, which is consistent with the Rare Disease performance.
Analysis Benchmark
In this section you can find information about the performance of the main variant storage operations and the most common queries and clinical analyses. For data loading performance, see the Genomic Data Load section above.
Variant Storage Operations
Variant Storage operations take care of preparing the data for executing queries and analysis. There are two main operations: Variant Annotation and Cohort Stats Calculation.
Variant Annotation
This operation uses CellBase to annotate each unique variant in the database. The annotation includes consequence types, population frequencies, conservation scores, clinical info, etc., and is typically used for variant queries and clinical analysis. Variant annotation of the 585 million unique variants of the GRCh38 Germline project took about 3 days, i.e. about 200 million variants were annotated per day.
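For reference, variant annotation can also be requested directly from CellBase over REST. The sketch below assumes the public CellBase v4 endpoint; the host, variant and parameters are illustrative, and the response parsing is kept deliberately defensive.

```python
# Minimal sketch: annotate a single variant through the CellBase v4 REST API.
# Host and variant are illustrative examples, not the GEL setup.
import requests

HOST = 'http://bioinfo.hpc.cam.ac.uk/cellbase/webservices/rest/v4'
variant = '19:45411941:T:C'

url = '{}/hsapiens/genomic/variant/{}/annotation'.format(HOST, variant)
payload = requests.get(url, params={'assembly': 'GRCh38'}).json()

# Each annotation carries consequence types, population frequencies,
# conservation scores, clinical info, etc.
annotation = payload['response'][0]['result'][0]
for ct in annotation.get('consequenceTypes', []):
    print(ct.get('geneName'),
          [so['name'] for so in ct.get('sequenceOntologyTerms', [])])
```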
Cohort Stats Calculation
Cohort Stats are used for filtering variants in a similar way to the population frequencies. A set of cohorts was defined in each study:
ALL with all samples in the study
PARENTS with all parents in the study (only for Rare Disease studies)
UNAFF_PARENTS with all unaffected parents in the study (only for Rare Disease studies)
Pre-computing stats for different cohorts and tens of thousands of samples is a high-performance operation that ran in less than 2 hours for each study.
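To make the computation concrete, here is a minimal, engine-agnostic sketch of the kind of per-cohort statistic that is pre-computed, namely the alternate allele frequency of a variant within a cohort; the genotype encoding is the usual VCF one and the example cohort is invented.

```python
# Minimal sketch of a per-cohort statistic: the alternate allele frequency
# of one variant within one cohort. Genotypes are illustrative VCF strings.
def cohort_alt_frequency(genotypes):
    """Return the alternate allele frequency for one variant in one cohort."""
    alt_alleles, called_alleles = 0, 0
    for gt in genotypes:
        for allele in gt.replace('|', '/').split('/'):
            if allele == '.':
                continue          # skip missing alleles
            called_alleles += 1
            if allele != '0':
                alt_alleles += 1
    return alt_alleles / called_alleles if called_alleles else 0.0

# Example: a tiny, made-up 'UNAFF_PARENTS'-style cohort
print(cohort_alt_frequency(['0/0', '0/1', '1/1', './.']))  # -> 0.5
```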
Query and Aggregation Stats
To study the query performance we used RD38, which is the largest study, with 438 million variants and 33,000 samples. We first ran some queries against the aggregated data, filtering by variant annotation and cohort stats. As we were interested in the performance of the different indexes, we limited the results returned to the first 10 variants and excluded the genotype data of the 33,000 samples; by doing this we removed the effect of reading from disk or transferring data through the network, which is very variable across different clusters. For queries using patient data go to the next section. Here you can find some of the common queries executed:
As can be observed, most queries run in under 1 second, and you can combine as many filters as wanted.
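As an illustration of how such aggregated queries can be expressed, here is a minimal sketch using the pyopencga client configured earlier; the parameter names follow the current client, and the study, gene and thresholds are illustrative rather than the exact GEL filters.

```python
# Minimal sketch of an aggregated variant query; values are illustrative.
result = oc.variants.query(
    study='RD38',
    gene='BRCA2',
    ct='missense_variant',                         # consequence type filter
    populationFrequencyAlt='1kG_phase3:ALL<0.01',  # annotation-based filter
    include='id,annotation',                       # leave out per-sample genotypes
    limit=10                                       # only the first 10 variants
)
for variant in result.get_results():
    print(variant['id'])
```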
Clinical Analysis
We also used RD38 here, which is the largest study. Clinical queries, or sample queries, restrict the results to variants from a specific set of samples. These queries can use all the filters from the general queries. The result here also includes a pathogenicity prediction for each variant, which determines the possible conditions associated with the variant.
As can be observed, most of the family clinical analyses run in less than 2 seconds in the largest study with 33,000 samples.
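For illustration, a sample-constrained query of this kind might look as follows with the pyopencga client configured earlier; the sample ID and the genotype filter syntax are illustrative assumptions rather than the exact GEL analysis.

```python
# Minimal sketch of a sample-constrained (clinical-style) query.
# The sample ID is hypothetical; the filter asks for het or hom-alt genotypes.
result = oc.variants.query(
    study='RD38',
    sample='LP3000123-DNA_A01:0/1,1/1',
    ct='missense_variant,stop_gained',
    limit=10
)
for variant in result.get_results():
    print(variant['id'], variant['type'])
```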
User Interfaces
Several user interfaces have been developed to query and analyse data from OpenCGA: the IVA web-based tool, Python and R clients, and a command-line interface.
IVA
IVA v1.0.3 was installed to provide a friendly web-based analysis tool to browse variants and execute clinical analysis.
Command line
You can also query variants efficiently using the built-in command line. Performance depends on the number of samples fetched and the RPC used (REST or gRPC); in the best scenario you can fetch a few thousand variants per second. You can see a simple example here producing a VCF file:
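The original command-line snippet is not reproduced here. As a rough Python alternative, assuming the pyopencga client configured earlier, the following sketch pages through a variant query and writes a minimal sites-only VCF; the study, region and page size are illustrative.

```python
# Minimal sketch: page through a variant query and write a sites-only VCF
# (no genotype columns). Study, region and page size are illustrative.
with open('export.vcf', 'w') as out:
    out.write('##fileformat=VCFv4.2\n')
    out.write('#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\n')
    skip = 0
    while True:
        page = oc.variants.query(study='RD38', region='22',
                                 limit=1000, skip=skip).get_results()
        if not page:
            break
        for v in page:
            out.write('{}\t{}\t.\t{}\t{}\t.\t.\t.\n'.format(
                v['chromosome'], v['start'],
                v['reference'] or 'N', v['alternate'] or 'N'))
        skip += len(page)
```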
Support
The OpenCB team is setting up Zetta Genomics, a start-up to offer support, consultancy and custom feature development. We have partnered with Microsoft Azure to ensure that the OpenCB Suite runs efficiently in the Microsoft Azure cloud. We are currently running a proof-of-concept with GEL data to benchmark and test Azure.
Acknowledgements
We would like to thank Genomics England very much for their support and for trusting in OpenCGA and the rest of the OpenCB Suite for this amazing release. In particular, we would like to thank Augusto Rendon, Anna Need, Carolyn Tregidgo, Frank Nankivell and Chris Odhams for their support, testing and valuable feedback.