% NOTE -- ONLY EDIT THE .Rnw FILE!!! The .tex file is
% likely to be overwritten.
%
%\VignetteIndexEntry{Using SRAdb to Query the Sequence Read Archive}
%\VignetteKeywords{tutorial, sequencing, sql, data}
%
%\VignetteDepends{RSQLite, RCurl}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt]{article}
\usepackage{amsmath,fullpage}
\usepackage{hyperref}
\newcommand{\R}{{\textsf{R}}}
\newcommand{\code}[1]{{\texttt{#1}}}
\newcommand{\term}[1]{{\emph{#1}}}
\newcommand{\Rpackage}[1]{\textsf{#1}}
\newcommand{\Rfunction}[1]{\texttt{#1}}
\newcommand{\Robject}[1]{\texttt{#1}}
\newcommand{\Rclass}[1]{{\textit{#1}}}
\newcommand{\Rmethod}[1]{{\textit{#1}}}
\newcommand{\Rfunarg}[1]{{\textit{#1}}}

\begin{document}
\SweaveOpts{concordance=TRUE}

\title{Using the \Rpackage{SRAdb} Package to Query the Sequence Read Archive}
\author{Jack Zhu\footnote{zhujack@mail.nih.gov} and Sean Davis\footnote{sdavis2@mail.nih.gov}\\
\\
Genetics Branch, Center for Cancer Research,\\
National Cancer Institute,\\
National Institutes of Health
}

<<echo=FALSE>>=
options(width=50)
@

\maketitle

\section{Introduction}

High-throughput sequencing technologies have very rapidly become standard tools in biology. The data that these machines generate are large and extremely rich. As such, the Sequence Read Archives (SRA) have been set up at NCBI in the United States, EMBL in Europe, and DDBJ in Japan to capture these data in public repositories, in much the same spirit as MIAME-compliant microarray databases like NCBI GEO and EBI ArrayExpress.

Accessing data in SRA requires finding it first. This R package provides a convenient and powerful framework to do just that. In addition, \Rpackage{SRAdb} features functionality to determine the availability of sequence files and to download files of interest.

SRA currently stores aligned reads or other processed data that relies on alignment to a reference genome. Please refer to the SRA handbook (http://www.ncbi.nlm.nih.gov/books/NBK47537/) for details. NCBI GEO also often contains aligned reads for sequencing experiments, and the \Rpackage{SRAdb} package can help to provide links to these data as well. In combination with the \Rpackage{GEOmetadb} and \Rpackage{GEOquery} packages, these data are also, then, accessible.

\begin{figure}
  \includegraphics[width=1.0\textwidth]{SRAdiagram}
  \caption{A graphical representation (sometimes called an \textit{Entity-Relationship Diagram}) of the relationships between the main tables in the \Rpackage{SRAdb} package.}
  \label{figure:ERD}
\end{figure}

\section{Getting Started}

Since SRA is a continuously growing repository, the \Rpackage{SRAdb} SQLite file is updated regularly. The first step, then, is to get the \Rpackage{SRAdb} SQLite file from its online location. The download and uncompress steps are done automatically with a single command, \Rfunction{getSRAdbFile}.

<<>>=
library(SRAdb)
sqlfile <- file.path(system.file('extdata', package='SRAdb'), 'SRAmetadb_demo.sqlite')
@

Note: the above ``SRAmetadb\_demo.sqlite'' is a down-sized demo SRAmetadb SQLite database. The actual SRAmetadb SQLite database can be downloaded using the function \Rfunction{getSRAdbFile}. Warning: the actual SRAmetadb SQLite database is quite large ($>$35~GB as of May 2018) after uncompression, so downloading and uncompressing it could take quite a few minutes depending on your network bandwidth.
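If you work with the full database, a common pattern is to download it once and reuse the local copy in later sessions. Below is a minimal sketch (not evaluated here); the \code{destdir} and \code{destfile} arguments are assumed to match the function's defaults, so please check \code{?getSRAdbFile} on your installation before relying on them.

<<eval=FALSE>>=
## Sketch: reuse an existing SRAmetadb.sqlite if present, otherwise download and
## uncompress it into the current working directory.
## The destdir/destfile argument names are assumptions; see ?getSRAdbFile.
sqlfile <- 'SRAmetadb.sqlite'
if( ! file.exists(sqlfile) ) {
    sqlfile <- getSRAdbFile( destdir = getwd(), destfile = 'SRAmetadb.sqlite.gz' )
}
@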
Direct links for downloading the SRAmetadb SQLite database:
\begin{itemize}
  \item \url{https://s3.amazonaws.com/starbuck1/sradb/SRAmetadb.sqlite.gz}
  \item \url{https://gbnci-abcc.ncifcrf.gov/backup/SRAmetadb.sqlite.gz}
\end{itemize}

If interested, the download can be timed using the following commands:

<<eval=FALSE>>=
timeStart <- proc.time()
sqlfile <- getSRAdbFile()
proc.time() - timeStart
@

Since this SQLite file is of key importance in \Rpackage{SRAdb}, it is perhaps of some interest to know some details about the file itself.

<<>>=
file.info(sqlfile)
@

Then, create a connection for later queries. The standard \Rpackage{DBI} functionality as implemented in the \Rpackage{RSQLite} function \Rfunction{dbConnect} makes the connection to the database. The \Rfunction{dbDisconnect} function disconnects the connection.

<<>>=
sra_con <- dbConnect(SQLite(),sqlfile)
@

For further details, at this time see \code{help('SRAdb-package')}.

\section{Using the \Rpackage{SRAdb} package}

\subsection{Interacting with the database}

The functionality covered in this section is covered in much more detail in the \Rpackage{DBI} and \Rpackage{RSQLite} package documentation. We cover only enough here to be useful. The \Rfunction{dbListTables} function lists all the tables in the SQLite database handled by the connection object \Robject{sra\_con} created in the previous section. A simplified illustration of the relationships between the main SRA data types is shown in Figure~\ref{figure:ERD}.

<<>>=
sra_tables <- dbListTables(sra_con)
sra_tables
@

There is also the \Rfunction{dbListFields} function that can list the database fields associated with a table.

<<>>=
dbListFields(sra_con,"study")
@

Sometimes it is useful to get the actual SQL schema associated with a table. Here, we get the table schema for the \textit{study} table:

<<>>=
dbGetQuery(sra_con,'PRAGMA TABLE_INFO(study)')
@

The table \textit{col\_desc} contains the field names, types, descriptions and default values:

<<>>=
colDesc <- colDescriptions(sra_con=sra_con)[1:5,]
colDesc[, 1:4]
@

\subsection{Writing SQL queries and getting results}

Select 3 records from the \textit{study} table and show the first 3 columns:

<<>>=
rs <- dbGetQuery(sra_con,"select * from study limit 3")
rs[, 1:3]
@

Get the SRA study accessions and titles of studies whose study\_description starts with ``Transcriptome''. The ``\%'' sign is used in combination with the ``like'' operator to do a ``wildcard'' search for the term ``Transcriptome'' followed by any number of characters.

<<>>=
rs <- dbGetQuery(sra_con, paste(
    "select study_accession, study_title from study where",
    "study_description like 'Transcriptome%'",sep=" "))
rs[1:3,]
@

Of course, we can combine programming and data access. A simple \Rfunction{sapply} example shows how to query each of the tables for its number of records.

<<>>=
getTableCounts <- function(tableName,conn) {
  sql <- sprintf("select count(*) from %s",tableName)
  return(dbGetQuery(conn,sql)[1,1])
}
do.call(rbind,sapply(sra_tables[c(2,4,5,11,12)],
                     getTableCounts, sra_con, simplify=FALSE))
@

Some high-level statistics can be helpful to get an overall idea of what data are available in the SRA database.
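For example, a quick tally of studies per submitting center can be computed directly in SQL. This is a sketch only (not evaluated here); it assumes the \textit{study} table carries a \code{center\_name} column, which you can confirm with \Rfunction{dbListFields}.

<<eval=FALSE>>=
## Sketch: count studies per submitting center.
## Assumes a center_name column in the study table;
## verify with dbListFields(sra_con, "study").
dbGetQuery(sra_con, paste(
    "SELECT center_name AS Center, count(*) AS Studies",
    "FROM study GROUP BY center_name ORDER BY Studies DESC LIMIT 10"))
@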
List all study types and the number of studies of each type:

<<>>=
rs <- dbGetQuery(sra_con, paste(
    "SELECT study_type AS StudyType, count( * ) AS Number FROM `study`",
    "GROUP BY study_type order by Number DESC ", sep=" "))
rs
@

List all instrument models and the number of experiments for each of them:

<<>>=
rs <- dbGetQuery(sra_con, paste(
    "SELECT instrument_model AS 'Instrument Model', count( * ) AS Experiments",
    "FROM `experiment` GROUP BY instrument_model order by Experiments DESC", sep=" "))
rs
@

List all types of library strategies and the number of runs for each of them:

<<>>=
rs <- dbGetQuery(sra_con, paste(
    "SELECT library_strategy AS 'Library Strategy', count( * ) AS Runs",
    "FROM `experiment` GROUP BY library_strategy order by Runs DESC", sep=" "))
rs
@

\subsection{Conversion of SRA entity types}

Large-scale consumers of SRA data might want to convert one SRA entity type to others, e.g.\ find all experiment accessions (SRX, ERX or DRX) and run accessions (SRR, ERR or DRR) associated with ``SRP001007'' and ``SRP000931''. The \Rfunction{sraConvert} function does the conversion with a very fast mapping between entity types.

Convert ``SRP001007'' and ``SRP000931'' to all other types available in the SRAmetadb\_demo.sqlite database:

<<>>=
conversion <- sraConvert( c('SRP001007','SRP000931'), sra_con = sra_con )
conversion[1:3,]
@

Check which SRA entity types were returned and how many entities there are of each type:

<<>>=
apply(conversion, 2, unique)
@

\subsection{Full text search}

Searching with regular table- and field-specific SQL commands can be very powerful if you are familiar with SQL and with the table structures. If not, SQLite has a very handy module called full text search (fts3), which allows users to do Google-like searches with terms and operators. The \Rfunction{getSRA} function does a full text search against all fields in a fts3 table, with terms constructed using the standard query syntax or the enhanced query syntax. Please see http://www.sqlite.org/fts3.html for details.
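If you prefer raw SQL, the same full text index can also be queried directly with the fts3 \code{MATCH} operator. The following is a sketch only (not evaluated here); it assumes the full text table is named \textit{sra\_ft} and carries \code{run\_accession} and \code{study\_title} columns, which you can confirm with \Rfunction{dbListTables} and \Rfunction{dbListFields}.

<<eval=FALSE>>=
## Sketch: query the fts3 virtual table directly with MATCH.
## Assumes the full text table is called sra_ft; check dbListTables(sra_con)
## and dbListFields(sra_con, "sra_ft") before use.
dbGetQuery(sra_con, paste(
    "SELECT run_accession, study_title FROM sra_ft",
    "WHERE sra_ft MATCH 'breast cancer' LIMIT 3"))
@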
Find all combined run and study records in which any field contains both the words ``breast'' and ``cancer'', even when ``breast'' and ``cancer'' are not next to each other:

<<>>=
rs <- getSRA( search_terms = "breast cancer",
              out_types = c('run','study'), sra_con )
dim(rs)
rs <- getSRA( search_terms = "breast cancer",
              out_types = c("submission", "study", "sample", "experiment", "run"),
              sra_con )
## get counts for some fields of interest
apply( rs[, c('run','sample','study_type','platform', 'instrument_model')], 2,
       function(x) {length(unique(x))} )
@

If you only want SRA records containing the exact phrase ``breast cancer'', in which ``breast'' and ``cancer'' are separated by nothing but a space:

<<>>=
rs <- getSRA (search_terms ='"breast cancer"',
              out_types=c('run','study'), sra_con)
dim(rs)
@

Find all sample records containing either the word ``MCF7'' or ``MCF-7'':

<<>>=
rs <- getSRA( search_terms ='MCF7 OR "MCF-7"',
              out_types = c('sample'), sra_con )
dim(rs)
@

Find all submissions submitted by GEO:

<<>>=
rs <- getSRA( search_terms ='submission_center: GEO',
              out_types = c('submission'), sra_con )
dim(rs)
@

Find study records containing a word beginning with ``Carcino'':

<<>>=
rs <- getSRA( search_terms ='Carcino*',
              out_types = c('study'), sra_con=sra_con )
dim(rs)
@

\subsection{Download SRA data files}

List the FTP addresses of the SRA data files associated with ``SRX000122'':

<<>>=
rs = listSRAfile( c("SRX000122"), sra_con, fileType = 'sra' )
@

The above function does not check the availability, size or date of the SRA data files on the server, but the \Rfunction{getSRAinfo} function does, which is good to know if you are preparing to download them:

<<>>=
# rs = getSRAinfo ( c("SRX000122"), sra_con, sraType = "sra" )
# rs[1:3,]
@

Next, you might want to download SRA data files from the FTP site. The \Rfunction{getSRAfile} function will download all available SRA data files associated with ``SRR000648'' and ``SRR000657'' from the NCBI SRA FTP site to the current directory:

<<eval=FALSE>>=
getSRAfile( c("SRR000648","SRR000657"), sra_con, fileType = 'sra' )
@

The downloaded SRA data files can then be easily converted into fastq files using fastq-dump from the SRA Toolkit ( http://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=software ):

<<eval=FALSE>>=
system ("fastq-dump SRR000648.sra")
@

Or download fastq files directly from EBI using the FTP protocol:

<<eval=FALSE>>=
getFASTQinfo( c("SRR000648","SRR000657"), sra_con, srcType = 'ftp' )
getSRAfile( c("SRR000648","SRR000657"), sra_con, fileType = 'fastq' )
@

\subsection{Download SRA data files using the fasp protocol}

Currently both NCBI and EBI support the fasp protocol for downloading SRA data files, which has several advantages over the FTP protocol, including high-speed transfer of large files over long distances. Please check the EBI or NCBI web sites, or Aspera (http://www.asperasoft.com/), for details. \Rpackage{SRAdb} includes two wrapper functions that use the ascp command line program (fasp protocol), which is part of the Aspera Connect software, to download SRA data files from either NCBI or EBI. But, due to the complexity of installing that software and of the options within it, the functions developed here ask users to supply the main ascp command.
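Before constructing such a command, it can help to confirm that the ascp client is actually visible to R. A small sketch (not evaluated here):

<<eval=FALSE>>=
## Check whether the Aspera 'ascp' command-line client is on the PATH;
## an empty string means R cannot find it and the full path must be supplied
## in the ascp command below.
Sys.which('ascp')
@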
Download fastq files from EBI using the fasp protocol:

<<eval=FALSE>>=
## List fasp addresses for associated fastq files:
listSRAfile ( c("SRX000122"), sra_con, fileType = 'fastq', srcType='fasp')
## get fasp addresses for associated fastq files:
getFASTQinfo( c("SRX000122"), sra_con, srcType = 'fasp' )

## download fastq files using the fasp protocol:
# the following ascpCMD needs to be constructed according to your
# system configuration
# a common ascp installation on a Linux system:
ascpCMD <- 'ascp -QT -l 300m -i /usr/local/aspera/connect/etc/asperaweb_id_dsa.putty'
## a common ascpCMD for a Mac OS X system:
# ascpCMD <- "'/Applications/Aspera Connect.app/Contents/
#    Resources/ascp' -QT -l 300m -i '/Applications/
#    Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.putty'"

getSRAfile( c("SRX000122"), sra_con, fileType = 'fastq',
            srcType = 'fasp', ascpCMD = ascpCMD )
@

Download SRA files from NCBI using the fasp protocol:

<<eval=FALSE>>=
## List fasp addresses of sra files associated with "SRX000122"
listSRAfile( c("SRX000122"), sra_con, fileType = 'sra', srcType='fasp')
## download sra files using the fasp protocol
getSRAfile( c("SRX000122"), sra_con, fileType = 'sra',
            srcType = 'fasp', ascpCMD = ascpCMD )
@

The download messages will show significantly faster download speeds than the FTP protocol:
\begin{verbatim}
SRR000658.sra  100%  155MB  291Mb/s  00:05
Completed: 159492K bytes transferred in 5 seconds (249247K bits/sec), in 1 file.
...
\end{verbatim}

\section{Interactive views of sequence data}

Working with sequence data is often best done interactively in a genome browser, a task not easily done from R itself. We have found the Integrative Genomics Viewer (IGV) to be a high-performance visualization tool for interactive exploration of large, integrated datasets, and increasingly useful for visualizing sequence alignments. In \Rpackage{SRAdb}, the functions \Rfunction{startIGV}, \Rfunction{load2IGV} and \Rfunction{load2newIGV} provide convenient functionality for R to interact with IGV. Note that on some operating systems these functions might not work, or might not work well.

Launch IGV with 2~GB of maximum usable memory:

<<eval=FALSE>>=
startIGV("mm")
@

IGV offers a remote control port that allows R to communicate with IGV. The current command set is fairly limited, but it does allow some IGV operations to be performed from the R console. To utilize this functionality, be sure that IGV is set to allow communication via the ``enable port'' option in the IGV preferences. To load BAM files into IGV and then manipulate the window:

<<eval=FALSE>>=
exampleBams = file.path(system.file('extdata',package='SRAdb'),
                        dir(system.file('extdata',package='SRAdb'),pattern='bam$'))
sock <- IGVsocket()
IGVgenome(sock, 'hg18')
IGVload(sock, exampleBams)
IGVgoto(sock, 'chr1:1-1000')
IGVsnapshot(sock)
@

\section{Graphic view of SRA entities}

\begin{figure}
  \includegraphics[width=1.0\textwidth]{sraGraph}
  \caption{A graphical representation of the relationships between the SRA entities.}
  \label{figure:graph}
\end{figure}

Due to the nature of SRA data and its design, it is sometimes hard to get a whole picture of the relationships among a set of SRA entities. The functions \Rfunction{entityGraph} and \Rfunction{sraGraph} in this package generate graphNEL objects (with edgemode='directed') from an input data.frame or directly from search terms, and the \Rfunction{plot} function can then easily draw a diagram.
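\Rfunction{entityGraph} can also be applied to a data.frame of accessions that you have already retrieved, for example from \Rfunction{getSRA} with \code{acc\_only=TRUE}. A sketch (not evaluated here):

<<eval=FALSE>>=
## Sketch: build and plot an entity graph from a data.frame of SRA accessions
## obtained with getSRA(..., acc_only=TRUE), rather than from a search term.
acc <- getSRA( search_terms = 'primary thyroid cell line',
               sra_con = sra_con, acc_only = TRUE )
g <- entityGraph(acc)
plot(g)
@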
Create a graphNEL object directly from the full text search results for the terms ``primary thyroid cell line'':

<<eval=FALSE>>=
library(SRAdb)
library(Rgraphviz)
g <- sraGraph('primary thyroid cell line', sra_con)
attrs <- getDefaultAttrs(list(node=list(
    fillcolor='lightblue', shape='ellipse')))
plot(g, attrs=attrs)

## a similar search to the above; it returns a much larger data.frame
## and the resulting graph is too crowded
g <- sraGraph('Ewing Sarcoma', sra_con)
plot(g)
@

Please see Figure~\ref{figure:graph} for an example diagram.

It is considered good practice to explicitly disconnect from the database once we are done with it:

<<>>=
dbDisconnect(sra_con)
@

\section{Example use case}

This section uses the functionality of the \Rpackage{SRAdb} package to explore data from the 1000 Genomes project. Mainly:
\begin{enumerate}
  \item Get some statistics on the metadata and data files from the 1000 Genomes project using \Rpackage{SRAdb}
  \item Download data files
  \item Load BAM files into IGV from R
  \item Create some snapshots programmatically from R
\end{enumerate}

<<eval=FALSE>>=
library(SRAdb)
setwd('1000g')
if( ! file.exists('SRAmetadb.sqlite') ) {
    sqlfile <- getSRAdbFile()
} else {
    sqlfile <- 'SRAmetadb.sqlite'
}
sra_con <- dbConnect(SQLite(),sqlfile)

## get all related accessions
rs <- getSRA( search_terms = '"1000 Genomes Project"', sra_con=sra_con, acc_only=TRUE)
dim(rs)
head(rs)

## get counts for each data type
apply( rs, 2, function(x) {length(unique(x))} )
@

After you have decided which data from the 1000 Genomes project you would like to download from SRA, it might be helpful to know the file sizes before downloading them:

<<eval=FALSE>>=
runs <- tail(rs$run)
fs <- getSRAinfo( runs, sra_con, sraType = "sra" )
@

Now you can download the files through the FTP protocol:

<<eval=FALSE>>=
getSRAfile( runs, sra_con, fileType ='sra', srcType = "ftp" )
@

Or, you can download them through the fasp protocol:

<<eval=FALSE>>=
ascpCMD <- "'/Applications/Aspera Connect.app/Contents/Resources/ascp' -QT -l 300m -i '/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.putty'"
sra_files = getSRAfile( runs, sra_con, fileType ='sra',
                        srcType = "fasp", ascpCMD = ascpCMD )
@

Next you might want to convert the downloaded sra files into fastq files:

<<eval=FALSE>>=
## run fastq-dump on each downloaded sra file
for( fq in basename(sra_files$fasp) ) {
    system( paste("fastq-dump", fq) )
}
@

... to be completed.

\section{sessionInfo}

<<results=tex, echo=FALSE>>=
toLatex(sessionInfo())
@

\end{document}