---
title: "Doing all your analysis in R"
author: "Florian Privé"
date: "October 21, 2017"
output: html_document
bibliography: refs.bib
editor_options:
  chunk_output_type: console
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(fig.align = "center", out.width = "65%", fig.asp = 0.75)
options(width = 85)
```
A biological analysis is sometimes more appropriately called a pipeline because it generally consists of many steps, using many different pieces of software and data formats. These analysis pipelines are becoming very complex and usually make use of bash/perl scripts. For people like me who don't know much bash or perl, such scripts can be really hard to understand.
What is important in these pipelines? To list what comes to mind:
- use the command line
- manipulate files
- use regular expressions
- visualize results
- report results
I think we could do each of these operations in R.
And I think we should.
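To make this concrete, here is a minimal sketch of possible R counterparts for each item above (the function choices are mine, not the only options):
```{r, eval=FALSE}
# Use the command line: system2() runs any external program
system2("echo", args = "hello from R")
# Manipulate files: base R covers create/copy/rename/delete
dir.create("data", showWarnings = FALSE)
file.copy("data/old.txt", "data/new.txt")
# Use regular expressions
grepl("^rs[0-9]+$", c("rs123", "snp42"))
# Visualize results
plot(rnorm(100))
# Report results: knit this very document
rmarkdown::render("all-in-R.Rmd")
```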
The main reason would be to put all your analysis in a single notebook where you have all your code, results and possibly some writing. Using notebooks is good practice and makes it possible to have a **fully reproducible analysis**, which will become a standard in years to come. Another reason is simply that it is easier!
In this tutorial, I show an example of a moderately complex analysis of the HapMap3 data (phase III), all in R.
## Example with HapMap3
### Get the data
Do not just give a link to the data; use an R command to download it directly. Mouse clicks prevent reproducibility.
```{r}
# Get the URL of the data
site <- "https://ftp.ncbi.nlm.nih.gov/hapmap/genotypes/2009-01_phaseIII/plink_format/"
name <- "hapmap3_r2_b36_fwd.qc.poly"
ext <- ".tar.bz2"
```
```{r, eval=FALSE}
# Download the file
download.file(paste0(site, name, ext), tmp <- tempfile(fileext = ext))
# Uncompress the downloaded file in directory data/
untar(tmp, exdir = "data")
```
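As a side note, one might prefer this step to be idempotent, so that re-knitting the document does not re-download the archive. A minimal sketch (keeping the archive under `data/` is my own convention; the code above uses a tempfile):
```{r, eval=FALSE}
# Download and extract only if not already done
archive <- file.path("data", paste0(name, ext))
if (!file.exists(archive)) {
  download.file(paste0(site, name, ext), archive)
  untar(archive, exdir = "data")
}
```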
### Use PLINK to combine datasets
PLINK is a very efficient command-line program that is very useful for preprocessing genome-wide data [@chang2015second; @purcell2007plink].
```{r}
# devtools::install_github("privefl/bigsnpr")
plink <- bigsnpr::download_plink()
```
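A quick sanity check from R that the downloaded binary actually runs (`--version` is a standard PLINK flag):
```{r, eval=FALSE}
# Print the PLINK version to verify the binary works
system(glue::glue("{plink} --version"))
```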
```{r}
# List the per-population .ped/.map files; list.files() sorts names
# alphabetically (.map before .ped), so rev() makes each row written
# below a "<pop>.ped <pop>.map" pair, as --merge-list expects
files <- rev(list.files("data/hapmap3_pop", full.names = TRUE))
write(files, tmp <- tempfile(), ncolumns = 2)
library(glue)
system(glue("{plink} --merge-list {tmp} --out data/hapmap3"))
```
### Use PLINK for quality control
```{r}
system(glue(
"{plink} --bfile data/hapmap3",
" --maf 0.05",
" --geno 0.05",
" --hwe 1e-10",
" --autosome",
" --make-bed",
" --out data/hapmap3_qc"
))
```
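PLINK also writes a `.log` file next to its output; a quick (hedged) way to check from R how many variants survived these filters:
```{r, eval=FALSE}
# Show the lines of the PLINK log that report variant counts
log <- readLines("data/hapmap3_qc.log")
writeLines(grep("variants", log, value = TRUE))
```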
There exist many more quality control steps, but they are not the objective of this tutorial. For details, please refer to [@anderson2010data].
### Get a pruned dataset
First, we need to remove long-range LD regions and then prune the remaining SNPs.
```{r}
# Write long-range LD regions in a file
bigsnpr:::write.table2(bigsnpr::LD.wiki34, tmp <- tempfile())
# Get pruned SNPs
system(glue("{plink} --bfile data/hapmap3_qc",
" --exclude {tmp} --range",
" --indep-pairwise 50 5 0.2",
" --out {tmp}"))
```
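`--indep-pairwise` writes the IDs of the SNPs to keep to `{tmp}.prune.in` (and the discarded ones to `{tmp}.prune.out`), so we can count them directly from R:
```{r, eval=FALSE}
# Number of SNPs kept after pruning
length(readLines(glue("{tmp}.prune.in")))
```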
### Compute Principal Components
```{r PCA}
# Compute first PCs based on pruned SNPs using PLINK
system(glue("{plink} --bfile data/hapmap3_qc",
" --extract {tmp}.prune.in",
" --pca",
" --out {tmp}"))
```
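Besides the eigenvectors, `--pca` also writes the corresponding eigenvalues to `{tmp}.eigenval`; a minimal sketch of a scree plot to see how many PCs look interesting:
```{r, eval=FALSE}
# Scree plot of the eigenvalues (PLINK computes 20 PCs by default)
eigval <- scan(glue("{tmp}.eigenval"))
plot(eigval, type = "b", xlab = "PC", ylab = "Eigenvalue")
```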
```{r}
# Get result and plot it (data.table is faster for reading/writing files)
readLines(glue("{tmp}.eigenvec"), n = 3)
evec <- data.table::fread(glue("{tmp}.eigenvec"), data.table = FALSE)
# Get population info
txt <- "relationships_w_pops_121708.txt"
download.file(paste0(site, txt), paste0("data/", txt))
info <- data.table::fread(paste0("data/", txt), data.table = FALSE)
fam <- data.table::fread("data/hapmap3_qc.fam", data.table = FALSE)
fam <- dplyr::left_join(fam, info, by = c("V1" = "FID", "V2" = "IID"))
library(ggplot2)
ggplot(evec) +
bigstatsr::theme_bigstatsr() +
geom_point(aes(V3, V4, color = fam$population)) +
labs(x = "PC1", y = "PC2", color = "Population")
```
## Discussion
It would be easy to write functions that implement whole procedures in R, thanks to system calls and the {glue} package. This could make pipelines more understandable and fully reproducible.
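For instance, a hedged sketch of what such a wrapper could look like (the function name and defaults are mine, not an established API):
```{r, eval=FALSE}
# Hypothetical wrapper around the PLINK QC step used above
qc_plink <- function(plink, bfile, out,
                     maf = 0.05, geno = 0.05, hwe = 1e-10) {
  system(glue::glue(
    "{plink} --bfile {bfile}",
    " --maf {maf} --geno {geno} --hwe {hwe}",
    " --autosome --make-bed --out {out}"
  ))
  out
}
# qc_plink(plink, "data/hapmap3", "data/hapmap3_qc")
```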
## References