---
title: Using scran to analyze single-cell RNA-seq data
author:
- name: Aaron Lun
  email: infinite.monkeys.with.keyboards@gmail.com
date: "Revised: 1 November 2019"
output:
  BiocStyle::html_document:
    toc_float: true
package: scran
bibliography: ref.bib
vignette: >
  %\VignetteIndexEntry{Using scran to analyze scRNA-seq data}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, echo=FALSE, results="hide", message=FALSE}
require(knitr)
opts_chunk$set(error=FALSE, message=FALSE, warning=FALSE)
```

```{r setup, echo=FALSE, message=FALSE}
library(scran)
library(BiocParallel)
register(SerialParam()) # avoid problems with fastMNN parallelization.
set.seed(100)
```

# Introduction

Single-cell RNA sequencing (scRNA-seq) is a widely used technique for profiling gene expression in individual cells.
This allows molecular biology to be studied at a resolution that cannot be matched by bulk sequencing of cell populations.
The `r Biocpkg("scran")` package implements methods to perform low-level processing of scRNA-seq data,
including cell cycle phase assignment, scaling normalization, variance modelling and testing for correlated genes.
This vignette provides brief descriptions of these methods and some toy examples to demonstrate their use.

**Note:** A more comprehensive description of the use of `r Biocpkg("scran")` (along with other packages) in a scRNA-seq analysis workflow is available at https://osca.bioconductor.org.

# Setting up the data

We start off with a count matrix where each row is a gene and each column is a cell.
These can be obtained by mapping read sequences to a reference genome and then counting the number of reads mapped to the exons of each gene.
(See, for example, the `r Biocpkg("Rsubread")` package, which can perform both of these tasks.)
Alternatively, pseudo-alignment methods can be used to quantify the abundance of each transcript in each cell.
For simplicity, we will pull out an existing dataset from the `r Biocpkg("scRNAseq")` package.

```{r}
library(scRNAseq)
sce <- GrunPancreasData()
sce
```

This particular dataset is taken from a study of the human pancreas with the CEL-seq protocol [@grun2016denovo].
It is provided as a `SingleCellExperiment` object (from the `r Biocpkg("SingleCellExperiment")` package), which contains the raw data and various annotations.
We perform some cursory quality control to remove cells with low total counts or high spike-in percentages:

```{r}
library(scater)
qcstats <- perCellQCMetrics(sce)
qcfilter <- quickPerCellQC(qcstats, percent_subsets="altexps_ERCC_percent")
sce <- sce[,!qcfilter$discard]
summary(qcfilter$discard)
```

# Normalizing cell-specific biases

## Based on the gene counts

Cell-specific biases are normalized using the `computeSumFactors()` method, which implements the deconvolution strategy for scaling normalization [@lun2016pooling].
This computes size factors that are used to scale the counts in each cell.
The assumption is that most genes are not differentially expressed (DE) between cells, such that any differences in expression across the majority of genes represent some technical bias that should be removed.

```{r}
library(scran)
clusters <- quickCluster(sce)
sce <- computeSumFactors(sce, clusters=clusters)
summary(sizeFactors(sce))
```

For larger data sets, clustering should be performed with the `quickCluster()` function before normalization, as done above.
Briefly, cells are grouped into clusters of similar expression; normalization is applied within each cluster to compute size factors for each cell; and the factors are rescaled by normalization between clusters.
This reduces the risk of violating the above assumption when many genes are DE between clusters in a heterogeneous population.

Note that `computeSumFactors()` will automatically remove low-abundance genes, which provides some protection against zero or negative size factor estimates.
We also assume that quality control on the cells has already been performed, as low-quality cells with few expressed genes can often have negative size factor estimates.
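As a simple diagnostic, we can compare the deconvolution size factors to factors derived from the library sizes.
The sketch below assumes that `librarySizeFactors()` from `r Biocpkg("scater")` is available; systematic deviation from the diagonal reflects composition biases that are handled by deconvolution but not by library size normalization.

```{r}
# Deconvolution size factors versus library size factors; points far
# from the diagonal reflect cell-specific composition biases.
plot(librarySizeFactors(sce), sizeFactors(sce), log="xy",
    xlab="Library size factor", ylab="Deconvolution size factor")
abline(a=0, b=1, col="red")
```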
## Based on the spike-in counts

An alternative approach is to normalize based on the spike-in counts [@lun2017assessing].
The idea is that the same quantity of spike-in RNA was added to each cell prior to library preparation.
Size factors are computed to scale the counts such that the total coverage of the spike-in transcripts is equal across cells.
The main practical difference is that spike-in normalization preserves differences in total RNA content between cells, whereas `computeSumFactors()` and other non-DE methods do not.

```{r}
sce2 <- computeSpikeFactors(sce, "ERCC")
summary(sizeFactors(sce2))
```

## Computing normalized expression values

Normalized expression values are calculated using the `logNormCounts()` function from `r Biocpkg("scater")` [@mccarthy2017scater].
This will use the deconvolution size factors for the endogenous genes, and the spike-in-based size factors for the spike-in transcripts.
Each expression value can be interpreted as a log-transformed "normalized count" and can be used in downstream applications like clustering or dimensionality reduction.

```{r}
sce <- logNormCounts(sce)
```

# Variance modelling

We identify genes that drive biological heterogeneity in the data set by modelling the per-gene variance.
The aim is to use a subset of highly variable genes in downstream analyses like clustering, to improve resolution by removing genes driven by technical noise.
We decompose the total variance of each gene into its biological and technical components by fitting a trend to the endogenous variances [@lun2016step].
The fitted value of the trend is used as an estimate of the technical component, and we subtract the fitted value from the total variance to obtain the biological component for each gene.

```{r}
dec <- modelGeneVar(sce)
plot(dec$mean, dec$total, xlab="Mean log-expression", ylab="Variance")
curve(metadata(dec)$trend(x), col="blue", add=TRUE)
```

If we have spike-ins, we can use them to fit the trend instead.
This provides a more direct estimate of the technical variance and avoids assuming that most genes do not exhibit biological variability.

```{r}
dec2 <- modelGeneVarWithSpikes(sce, 'ERCC')
plot(dec2$mean, dec2$total, xlab="Mean log-expression", ylab="Variance")
points(metadata(dec2)$mean, metadata(dec2)$var, col="red")
curve(metadata(dec2)$trend(x), col="blue", add=TRUE)
```
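In both cases, the output is a `DataFrame` containing the decomposed variance components for each gene.
As a brief illustration, we can rank genes by the biological component to inspect the most variable genes:

```{r}
# Ranking genes by the biological component of the variance.
dec2[order(dec2$bio, decreasing=TRUE),]
```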
If we have some uninteresting factors of variation, we can block on these using `block=`.
This will perform the trend fitting and decomposition within each block before combining the statistics across blocks for output.
Statistics for each individual block can also be extracted for further inspection.

```{r, fig.wide=TRUE, fig.asp=1.5}
dec3 <- modelGeneVar(sce, block=sce$donor)
per.block <- dec3$per.block
par(mfrow=c(3, 2))
for (i in seq_along(per.block)) {
    decX <- per.block[[i]]
    plot(decX$mean, decX$total, xlab="Mean log-expression",
        ylab="Variance", main=names(per.block)[i])
    curve(metadata(decX)$trend(x), col="blue", add=TRUE)
}
```

We can then extract some top genes for use in downstream procedures.
This is usually done by passing the selected subset of genes to the `subset.row` argument (or equivalent) in the desired downstream function, as shown below for the PCA step.

```{r}
# Get the top 10% of genes.
top.hvgs <- getTopHVGs(dec, prop=0.1)

# Get the top 2000 genes.
top.hvgs2 <- getTopHVGs(dec, n=2000)

# Get all genes with positive biological components.
top.hvgs3 <- getTopHVGs(dec, var.threshold=0)

# Get all genes with FDR below 5%.
top.hvgs4 <- getTopHVGs(dec, fdr.threshold=0.05)

# Running the PCA with the top 10% of HVGs.
sce <- runPCA(sce, subset_row=top.hvgs)
reducedDimNames(sce)
```

# Automated PC choice

Principal components analysis is commonly performed to denoise and compact the data prior to downstream analysis.
A common question is how many PCs to retain: more PCs will capture more biological signal, at the cost of retaining more noise and requiring more computational work.
One approach to choosing the number of PCs is to use the technical component estimates to determine the proportion of variance that should be retained.
This is implemented in `denoisePCA()`, which takes the estimates returned by `modelGeneVar()` or friends.
(For greater accuracy, we use the fit with the spike-ins; we also subset to only the top HVGs to remove noise.)

```{r}
sced <- denoisePCA(sce, dec2, subset.row=getTopHVGs(dec2, prop=0.1))
ncol(reducedDim(sced, "PCA"))
```

Another approach is based on the assumption that each subpopulation should be separated from the others on a different axis of variation.
Thus, we choose the number of PCs that is not less than the number of subpopulations (which are unknown, of course, so we use the number of clusters as a proxy).
It is then simple to subset the dimensionality reduction result to the desired number of PCs.

```{r}
output <- getClusteredPCs(reducedDim(sce))
npcs <- metadata(output)$chosen
reducedDim(sce, "PCAsub") <- reducedDim(sce, "PCA")[,1:npcs,drop=FALSE]
npcs
```

# Graph-based clustering

Clustering of scRNA-seq data is commonly performed with graph-based methods due to their relative scalability and robustness.
`r Biocpkg("scran")` provides several graph construction methods based on shared nearest neighbors [@xu2015identification] through the `buildSNNGraph()` function.
The graph is most commonly generated from the selected PCs, after which community detection methods from the `r CRANpkg("igraph")` package can be used to identify clusters.

```{r}
# In this case, using the PCs that we chose from getClusteredPCs().
g <- buildSNNGraph(sce, use.dimred="PCAsub")
cluster <- igraph::cluster_walktrap(g)$membership
sce$cluster <- factor(cluster)
table(sce$cluster)
```

We can then use methods from `r Biocpkg("scater")` to visualize the clusters on a $t$-SNE plot, as shown below.

```{r}
sce <- runTSNE(sce, dimred="PCAsub")
plotTSNE(sce, colour_by="cluster", text_by="cluster")
```
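Other community detection algorithms from `r CRANpkg("igraph")` can be applied to the same graph.
As a quick sketch, the comparison below uses `igraph::cluster_louvain()` to check how sensitive the cluster assignments are to the choice of algorithm.

```{r}
# Comparing Walktrap and Louvain community assignments on the same graph.
cluster2 <- igraph::cluster_louvain(g)$membership
table(Walktrap=cluster, Louvain=cluster2)
```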
Another diagnostic is to examine the ratio of observed to expected edge weights for each pair of clusters (closely related to the modularity score used in many `cluster_*` functions).
We would usually expect to see high observed weights between cells in the same cluster and minimal weights between clusters, indicating that the clusters are well-separated.
Large off-diagonal entries indicate that some clusters are closely related, which is useful to know when checking that they are annotated consistently.

```{r}
ratio <- clusterModularity(g, cluster, as.ratio=TRUE)

library(pheatmap)
pheatmap(log10(ratio+1), cluster_cols=FALSE, cluster_rows=FALSE,
    col=rev(heat.colors(100)))
```

By default, `buildSNNGraph()` uses the mode of shared neighbor weighting described by @xu2015identification, but other weighting methods (e.g., the Jaccard index) are also available by setting `type=`.
An unweighted $k$-nearest neighbor graph can also be constructed with `buildKNNGraph()`, as shown below.
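For example, the calls below (a brief sketch of the relevant arguments) construct a Jaccard-weighted SNN graph and an unweighted KNN graph from the same set of PCs.

```{r}
# SNN graph with Jaccard-index weighting of the edges.
g.jaccard <- buildSNNGraph(sce, use.dimred="PCAsub", type="jaccard")

# Unweighted k-nearest-neighbor graph.
g.knn <- buildKNNGraph(sce, use.dimred="PCAsub")
```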
# Identifying marker genes

The `findMarkers()` wrapper function will perform some simple differential expression tests between pairs of clusters to identify potential marker genes for each cluster.
For each cluster, we perform $t$-tests to identify genes that are DE in that cluster compared to at least one other cluster.
All pairwise tests are combined into a single ranking by simply taking the top genes from each pairwise comparison.
For example, if we take all genes with `Top <= 5`, this is equivalent to the union of the top 5 genes from each pairwise comparison.
This aims to provide a set of genes that is guaranteed to be able to distinguish the chosen cluster from all others.

```{r}
markers <- findMarkers(sce, sce$cluster)
markers[[1]][,1:3]
```

We can modify the tests by passing a variety of arguments to `findMarkers()`.
For example, the code below will perform Wilcoxon tests instead of $t$-tests;
only identify genes that are upregulated in the target cluster compared to each other cluster;
and require a minimum log~2~-fold change of 1 to be considered significant.

```{r}
wmarkers <- findMarkers(sce, sce$cluster, test.type="wilcox",
    direction="up", lfc=1)
wmarkers[[1]][,1:3]
```

We can also modify how the statistics are combined across pairwise comparisons.
Setting `pval.type="all"` requires a gene to be DE between each cluster and _every_ other cluster (rather than _any_ other cluster, as is the default with `pval.type="any"`).
This is a more stringent definition that can yield a more focused set of markers, but it may also fail to detect any markers in the presence of overclustering.

```{r}
markers <- findMarkers(sce, sce$cluster, pval.type="all")
markers[[1]][,1:2]
```

# Detecting correlated genes

Another useful procedure is to identify significant pairwise correlations between pairs of HVGs.
The idea is to distinguish between HVGs caused by random stochasticity and those that are driving systematic heterogeneity, e.g., between subpopulations.
Correlations are computed by the `correlatePairs()` function using a slightly modified version of Spearman's rho.
Testing is performed against the null hypothesis of independent genes, using a permutation method in `correlateNull()` to construct the null distribution.

```{r}
# Using the first 200 HVGs, which are the most interesting anyway.
of.interest <- top.hvgs[1:200]
cor.pairs <- correlatePairs(sce, subset.row=of.interest)
cor.pairs
```

As with variance estimation, if uninteresting substructure is present, this should be blocked on using the `block=` argument in both `correlateNull()` and `correlatePairs()`.
This avoids strong correlations due to the blocking factor.

```{r}
cor.pairs2 <- correlatePairs(sce, subset.row=of.interest, block=sce$donor)
```

The pairs can be used for choosing marker genes in experimental validation and to construct gene-gene association networks.
In other situations, the pairs may not be of direct interest -- rather, we just want to know whether a gene is correlated with any other gene.
This is often the case when we want to select a set of correlated HVGs for use in downstream steps like clustering or dimensionality reduction.
To do so, we use `correlateGenes()` to compute a single set of statistics for each gene, rather than for each pair.

```{r}
cor.genes <- correlateGenes(cor.pairs)
cor.genes
```

Significant correlations are defined at a false discovery rate (FDR) threshold of, e.g., 5%.
Note that the p-values are calculated by permutation and will have a lower bound;
if there were insufficient permutation iterations, a warning will be issued suggesting that more iterations be performed.

# Converting to other formats

The `SingleCellExperiment` object can be easily converted into other formats using the `convertTo()` method.
This allows analyses to be performed using other pipelines and packages.
For example, if DE analyses were to be performed using `r Biocpkg("edgeR")`, the count data in `sce` could be used to construct a `DGEList`.

```{r}
y <- convertTo(sce, type="edgeR")
```

By default (`get.spikes=FALSE`), rows corresponding to spike-in transcripts are dropped.
As such, the rows of `y` may not correspond directly to the rows of `sce` -- users should match by row name to ensure correct cross-referencing between objects.
Normalization factors are also automatically computed from the size factors.

The same conversion strategy roughly applies to the other supported formats.
DE analyses can be performed using `r Biocpkg("DESeq2")` by converting the object to a `DESeqDataSet`.
Cells can be ordered on pseudotime with `r Biocpkg("monocle")` by converting the object to a `CellDataSet` (in this case, normalized _unlogged_ expression values are stored).

# Getting help

Further information can be obtained by examining the documentation for each function (e.g., `?convertTo`);
reading the [Orchestrating Single Cell Analysis](https://osca.bioconductor.org) book;
or asking for help on the Bioconductor [support site](http://support.bioconductor.org) (please read the [posting guide](http://www.bioconductor.org/help/support/posting-guide) beforehand).

# Session information

```{r}
sessionInfo()
```

# References