---
output:
  github_document:
    html_preview: false
---
<!-- README.md and index.md are generated from README.Rmd. Please edit that file. -->
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)

pkgload::load_all()

set.seed(20230702)

clean_output <- function(x, options) {
  x <- gsub("0x[0-9a-f]+", "0xdeadbeef", x)
  x <- gsub("dataframe_[0-9]*_[0-9]*", " dataframe_42_42 ", x)
  x <- gsub("[0-9]*\\.___row_number ASC", "42.___row_number ASC", x)

  index <- x
  index <- gsub("─", "-", index)
  index <- strsplit(paste(index, collapse = "\n"), "\n---\n")[[1]][[2]]
  writeLines(index, "index.md")

  x <- fansi::strip_sgr(x)
  x
}

options(
  cli.num_colors = 256,
  cli.width = 80,
  width = 80,
  pillar.bold = TRUE
)

local({
  hook_source <- knitr::knit_hooks$get("document")
  knitr::knit_hooks$set(document = clean_output)
})

Sys.setenv(DUCKPLYR_META_SKIP = TRUE)
```
# duckplyr <a href="https://duckplyr.tidyverse.org"><img src="man/figures/logo.png" align="right" height="138" /></a>
<!-- badges: start -->
[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)
[![R-CMD-check](https://github.com/tidyverse/duckplyr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/tidyverse/duckplyr/actions/workflows/R-CMD-check.yaml)
[![Codecov test coverage](https://codecov.io/gh/tidyverse/duckplyr/graph/badge.svg)](https://app.codecov.io/gh/tidyverse/duckplyr)
<!-- badges: end -->
> A **drop-in replacement** for dplyr, powered by DuckDB for **fast operation**.
[dplyr](https://dplyr.tidyverse.org/) is the grammar of data manipulation in the tidyverse.
The duckplyr package will run all of your existing dplyr code with identical results, using [DuckDB](https://duckdb.org/) where possible to compute the results faster.
In addition, you can analyze larger-than-memory datasets straight from files on your disk or from the web.
If you are new to dplyr, the best place to start is the [data transformation chapter](https://r4ds.hadley.nz/data-transform) in R for Data Science.
## Installation
Install duckplyr from CRAN with:
``` r
install.packages("duckplyr")
```
You can also install the development version of duckplyr from [R-universe](https://tidyverse.r-universe.dev/builds):
``` r
install.packages("duckplyr", repos = c("https://tidyverse.r-universe.dev", "https://cloud.r-project.org"))
```
Or from [GitHub](https://github.com/) with:
``` r
# install.packages("pak")
pak::pak("tidyverse/duckplyr")
```
## Drop-in replacement for dplyr
Calling `library(duckplyr)` overwrites dplyr methods, enabling duckplyr for the entire session.
```{r attach}
library(conflicted)
library(duckplyr)
```
```{r echo = FALSE}
# Needed exactly because we use pkgload::load_all() above
# and because we want to show the "Overwriting" message
methods_overwrite()
Sys.setenv(DUCKPLYR_FALLBACK_COLLECT = 0)
```
```{r dplyr}
conflict_prefer("filter", "dplyr", quiet = TRUE)
```
The following code aggregates the in-flight delay by year and month for the first half of the year.
We use a variant of the `nycflights13::flights` dataset that works around an incompatibility with duckplyr.
```{r}
flights_df()

out <-
  flights_df() %>%
  filter(!is.na(arr_delay), !is.na(dep_delay)) %>%
  mutate(inflight_delay = arr_delay - dep_delay) %>%
  summarize(
    .by = c(year, month),
    mean_inflight_delay = mean(inflight_delay),
    median_inflight_delay = median(inflight_delay),
  ) %>%
  filter(month <= 6)
```
The result is a plain tibble:
```{r}
class(out)
```
Nothing has been computed yet.
Querying the number of rows, or a column, starts the computation:
```{r}
out$month
```
Note that, unlike dplyr, the results are not ordered; see `?config` for details.
However, once materialized, the results are stable:
```{r}
out
```
Restart R, or call `duckplyr::methods_restore()` to revert to the default dplyr implementation.
```{r}
duckplyr::methods_restore()
```
## Analyzing larger-than-memory data
An extended variant of this dataset is also available for download as Parquet files.
```{r}
year <- 2022:2024
base_url <- "https://blobs.duckdb.org/flight-data-partitioned/"
files <- paste0("Year=", year, "/data_0.parquet")
urls <- paste0(base_url, files)
urls
```
Using the httpfs DuckDB extension, we can query these files directly from R, without even downloading them first.
```{r}
duck_exec("INSTALL httpfs")
duck_exec("LOAD httpfs")
flights <- duck_parquet(urls)
```
Unlike with local data frames, automatic materialization of the results on access is disallowed by default.
```{r error = TRUE}
nrow(flights)
```
Queries on the remote data are executed lazily, and the results are not materialized until explicitly requested.
For printing, only the first few rows of the result are fetched.
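Full materialization can be requested explicitly with the standard dplyr `collect()` generic. A minimal self-contained sketch (using toy local data via `as_duck_tbl()` rather than the remote `flights` object, so it runs without a network connection):

``` r
library(duckplyr)

# Build a duckplyr frame from a local data frame.
lazy <- mtcars |>
  as_duck_tbl() |>
  summarize(.by = cyl, mean_mpg = mean(mpg))

# collect() forces the query to run and returns an ordinary data frame.
result <- collect(lazy)
class(result)
```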
```{r cache = TRUE}
flights
```
```{r cache = TRUE}
flights |>
  count(Year)
```
Complex queries can be executed on the remote data.
Note how only the relevant columns are fetched and the 2024 data isn't even touched, as it's not needed for the result.
```{r cache = TRUE}
out <-
  flights |>
  filter(!is.na(DepDelay), !is.na(ArrDelay)) |>
  mutate(InFlightDelay = ArrDelay - DepDelay) |>
  summarize(
    .by = c(Year, Month),
    MeanInFlightDelay = mean(InFlightDelay),
    MedianInFlightDelay = median(InFlightDelay),
  ) |>
  filter(Year < 2024)

out |>
  explain()

out |>
  print() |>
  system.time()
```
Over 10 million rows analyzed in about 10 seconds over the internet: not bad.
Of course, working with Parquet, CSV, or JSON files downloaded locally is possible as well.
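As a rough sketch, a local file can be queried the same way; the path below is a hypothetical placeholder, and `duck_parquet()` is the same helper used for the remote URLs above:

``` r
# "flights-2023.parquet" is a made-up path; substitute your own file.
local_flights <- duck_parquet("flights-2023.parquet")

local_flights |>
  count(Month)
```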
## Using duckplyr in other packages
Refer to `vignette("developers", package = "duckplyr")`.
## Telemetry
As a drop-in replacement for dplyr, duckplyr uses DuckDB for an operation only when it can, and falls back to dplyr otherwise.
A fallback will not change the correctness of the results, but it may be slower or consume more memory.
We would like to guide our efforts towards improving duckplyr, focusing on the features with the most impact.
To this end, duckplyr collects and uploads telemetry data about fallback situations, but only if permitted by the user:
- Collection is on by default, but can be turned off.
- Uploads are done upon request only.
- There is an option to upload automatically when the package is loaded; this is also opt-in.
The data collected contains:
- The package version
- The error message
- The operation being performed, and the arguments
- For the input data frames, only the structure is included (column types only), no column names or data
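For example, collection can be disabled for a session via the `DUCKPLYR_FALLBACK_COLLECT` environment variable, the same setting used in this README's setup code:

``` r
# Disable fallback logging for this session (0 turns collection off).
Sys.setenv(DUCKPLYR_FALLBACK_COLLECT = 0)
```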
```{r include = FALSE}
Sys.setenv(DUCKPLYR_FALLBACK_COLLECT = "")
Sys.setenv(DUCKPLYR_FALLBACK_AUTOUPLOAD = "")
fallback_purge()
```
Fallback is silent by default, but can be made verbose.
```{r}
Sys.setenv(DUCKPLYR_FALLBACK_INFO = TRUE)

out <-
  nycflights13::flights %>%
  duckplyr::as_duck_tbl() %>%
  mutate(inflight_delay = arr_delay - dep_delay)
```
After logs have been collected, the upload options are displayed the next time the duckplyr package is loaded in an R session.
```{r, echo = FALSE}
fallback_autoupload()
```
The `fallback_sitrep()` function describes the current configuration and the available options.
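For example:

``` r
# Print the current telemetry configuration and the knobs to change it.
duckplyr::fallback_sitrep()
```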
## How is this different from dbplyr?
The duckplyr package is a dplyr backend that uses DuckDB, a high-performance, embeddable analytical database.
It is designed to be a fully compatible drop-in replacement for dplyr, with *exactly* the same syntax and semantics:
- Input and output are data frames or tibbles.
- All dplyr verbs are supported, with fallback.
- All R data types and functions are supported, with fallback.
- No SQL is generated.
The dbplyr package is a dplyr backend that connects to SQL databases, and is designed to work with various databases that support SQL, including DuckDB.
Data must be copied into and collected from the database, and the syntax and semantics are similar but not identical to plain dplyr.
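To make the contrast concrete, here is a rough dbplyr sketch, assuming the DBI, dbplyr, and duckdb packages and an in-memory DuckDB connection (none of which are shown elsewhere in this README):

``` r
library(DBI)
library(dplyr)

# dbplyr: data must first be copied into the database...
con <- dbConnect(duckdb::duckdb())
copy_to(con, mtcars)

# ...queried via generated SQL...
result <- tbl(con, "mtcars") |>
  group_by(cyl) |>
  summarize(mean_mpg = mean(mpg)) |>
  collect()  # ...and explicitly collected back into R.

dbDisconnect(con, shutdown = TRUE)
```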