```{r include=FALSE, eval=TRUE}
knitr::opts_chunk$set(eval = FALSE)
source("r/render.R")
```
# Appendix
## Preface {#appendix-preface}
### Formatting {#appendix-ggplot2-theme}
The following `ggplot2` theme was used to format plots in this book:
```{r appendix-ggplot2-theme-code}
plot_style <- function() {
  font <- "Helvetica"
  ggplot2::theme_classic() +
    ggplot2::theme(
      plot.title = ggplot2::element_text(
        family = font, size = 14, color = "#222222"),
      plot.subtitle = ggplot2::element_text(
        family = font, size = 12, color = "#666666"),
      legend.position = "right",
      legend.background = ggplot2::element_blank(),
      legend.title = ggplot2::element_blank(),
      legend.key = ggplot2::element_blank(),
      legend.text = ggplot2::element_text(
        family = font, size = 14, color = "#222222"),
      axis.title.y = ggplot2::element_text(
        margin = ggplot2::margin(t = 0, r = 8, b = 0, l = 0),
        size = 14, color = "#666666"),
      axis.title.x = ggplot2::element_text(
        margin = ggplot2::margin(t = -2, r = 0, b = 0, l = 0),
        size = 14, color = "#666666"),
      axis.text = ggplot2::element_text(
        family = font, size = 14, color = "#222222"),
      axis.text.x = ggplot2::element_text(
        margin = ggplot2::margin(5, b = 10)),
      axis.ticks = ggplot2::element_blank(),
      axis.line = ggplot2::element_blank(),
      panel.grid.minor = ggplot2::element_blank(),
      panel.grid.major.y = ggplot2::element_line(color = "#eeeeee"),
      panel.grid.major.x = ggplot2::element_line(color = "#ebebeb"),
      panel.background = ggplot2::element_blank(),
      strip.background = ggplot2::element_rect(fill = "white"),
      strip.text = ggplot2::element_text(size = 20, hjust = 0)
    )
}
```
You can then activate this theme with:
```{r appendix-ggplot2-theme-activate}
ggplot2::theme_set(plot_style())
```
## Introduction {#appendix-intro}
### World's Storage Capacity {#appendix-storage-capacity}
The following script was used to generate the world's storage capacity diagram:
```{r eval=FALSE}
library(ggplot2)
library(dplyr)
library(tidyr)
read.csv("data/01-worlds-capacity-to-store-information.csv", skip = 8) %>%
  gather(key = storage, value = capacity, analog, digital) %>%
  mutate(year = X, terabytes = capacity / 1e+12) %>%
  ggplot(aes(x = year, y = terabytes, group = storage)) +
  geom_line(aes(linetype = storage)) +
  geom_point(aes(shape = storage)) +
  scale_y_log10(
    breaks = scales::trans_breaks("log10", function(x) 10^x),
    labels = scales::trans_format("log10", scales::math_format(10^x))
  ) +
  theme_light() +
  theme(legend.position = "bottom")
```
### Daily downloads of CRAN packages {#appendix-cran-downloads}
The CRAN downloads chart was generated with:
```{r eval=FALSE}
downloads_csv <- "data/01-intro-r-cran-downloads.csv"
if (!file.exists(downloads_csv)) {
  downloads <- cranlogs::cran_downloads(from = "2014-01-01", to = "2019-01-01")
  readr::write_csv(downloads, downloads_csv)
}

cran_downloads <- readr::read_csv(downloads_csv)

ggplot(cran_downloads, aes(date, count)) +
  labs(title = "CRAN Packages",
       subtitle = "Total daily downloads over time") +
  geom_point(colour = "black", pch = 21, size = 1) +
  xlab("year") + ylab("downloads") +
  scale_x_date(date_breaks = "1 year",
               labels = function(x) substring(x, 1, 4)) +
  scale_y_continuous(
    limits = c(0, 3.5 * 10^6),
    breaks = c(0.5 * 10^6, 10^6, 1.5 * 10^6, 2 * 10^6,
               2.5 * 10^6, 3 * 10^6, 3.5 * 10^6),
    labels = c("", "1M", "", "2M", "", "3M", "")
  )
```
## Getting Started {#appendix-starting}
### Prerequisites {#appendix-prerequisites}
#### Installing R {#appendix-install-r}
From [r-project.org](https://r-project.org/), download and launch the R installer for your platform; installers for Windows, macOS, and Linux are available.
```{r appendix-r-download, eval=TRUE, echo=FALSE, fig.width=4, fig.align='center', fig.cap='The R Project for Statistical Computing'}
render_image("images/appendix-download-r.png", "The R Project for Statistical Computing")
```
#### Installing Java {#appendix-install-java}
From [java.com/download](https://java.com/download), download and launch the Java installer for your platform; installers for Windows, macOS, and Linux are also available.
```{r appendix-java-download, eval=TRUE, fig.width=4, fig.align='center', echo=FALSE, fig.cap='Java Download Page'}
render_image("images/appendix-download-java.png", "Java Download Page")
```
Starting with Spark 2.1, Java 8 is required; previous versions of Spark support Java 7. Regardless, we recommend installing the Java Runtime Environment 8, or JRE 8 for short.
**Note:** For advanced readers already using the Java Development Kit (JDK for short): JDK 9+ is currently unsupported, so you will need to downgrade to JDK 8 by uninstalling JDK 9+ or by setting `JAVA_HOME` appropriately.
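If you have multiple JDKs installed, one option is to set `JAVA_HOME` from R before connecting to Spark. The following is a minimal sketch; the path below is hypothetical and will differ on your system:
```{r eval=FALSE}
# Point this R session at a JDK 8 installation; the path is
# hypothetical -- replace it with the JDK 8 location on your system
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64")

# Confirm which installation will be picked up
Sys.getenv("JAVA_HOME")
```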
#### Installing RStudio {#appendix-install-rstudio}
While installing RStudio is not strictly required to work with Spark from R, it will make you much more productive, so we recommend taking the time to install it. From [rstudio.com/download](https://www.rstudio.com/download), download and launch the installer for your platform: Windows, macOS, or Linux.
```{r appendix-rstudio-download, eval=TRUE, fig.width=4, fig.align='center', echo=FALSE, fig.cap='RStudio Downloads Page'}
render_image("images/appendix-rstudio.png", "RStudio Downloads Page")
```
After launching RStudio, you can use RStudio's console panel to execute the code provided in this chapter.
#### Using RStudio {#appendix-using-rstudio}
If you are not familiar with RStudio, you should make note of the following panes:
- *Console*: A standalone R console you can use to execute all the code presented in this book.
- *Packages*: This pane allows you to install `sparklyr` with ease, check its version, navigate to the help contents, etc.
- *Connections*: This pane allows you to connect to Spark, manage your active connection, and view the available datasets.
```{r appendix-rstudio-overview, eval=TRUE, fig.width=4, fig.align='center', echo=FALSE, fig.cap='RStudio Overview'}
render_image("images/appendix-rstudio-overview.png", "RStudio Overview")
```
## Analysis {#appendix-analysis}
### Hive Functions {#hive-functions}
Name | Description
-----|----------------
size(Map<K.V>) | Returns the number of elements in the map type.
size(Array<T>) | Returns the number of elements in the array type.
map_keys(Map<K.V>) | Returns an unordered array containing the keys of the input map.
map_values(Map<K.V>) | Returns an unordered array containing the values of the input map.
array_contains(Array<T>, value) | Returns TRUE if the array contains value.
sort_array(Array<T>) | Sorts the input array in ascending order according to the natural ordering of the array elements and returns it.
binary(string or binary) | Casts the parameter into a binary.
cast(expr as 'type') | Converts the results of the expression expr to the given type.
from_unixtime(bigint unixtime[, string format]) | Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string.
unix_timestamp() | Gets current Unix timestamp in seconds.
unix_timestamp(string date) | Converts time string in format yyyy-MM-dd HH:mm:ss to Unix timestamp (in seconds).
to_date(string timestamp) | Returns the date part of a timestamp string.
year(string date) | Returns the year part of a date.
quarter(date/timestamp/string) | Returns the quarter of the year for a date.
month(string date) | Returns the month part of a date or a timestamp string.
day(string date) dayofmonth(date) | Returns the day part of a date or a timestamp string.
hour(string date) | Returns the hour of the timestamp.
minute(string date) | Returns the minute of the timestamp.
second(string date) | Returns the second of the timestamp.
weekofyear(string date) | Returns the week number of a timestamp string.
extract(field FROM source) | Retrieve fields such as days or hours from source. Source must be a date, timestamp, interval or a string that can be converted into either a date or timestamp.
datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate.
date_add(date/timestamp/string startdate, tinyint/smallint/int days) | Adds a number of days to startdate.
date_sub(date/timestamp/string startdate, tinyint/smallint/int days) | Subtracts a number of days from startdate.
from_utc_timestamp({any primitive type} ts, string timezone) | Converts a timestamp in UTC to a given timezone.
to_utc_timestamp({any primitive type} ts, string timezone) | Converts a timestamp in a given timezone to UTC.
current_date | Returns the current date.
current_timestamp | Returns the current timestamp.
add_months(string start_date, int num_months, output_date_format) | Returns the date that is num_months after start_date.
last_day(string date) | Returns the last day of the month which the date belongs to.
next_day(string start_date, string day_of_week) | Returns the first date which is later than start_date and named as day_of_week.
trunc(string date, string format) | Returns date truncated to the unit specified by the format.
months_between(date1, date2) | Returns number of months between dates date1 and date2.
date_format(date/timestamp/string ts, string fmt) | Converts a date/timestamp/string to a value of string in the format specified by the date format fmt.
if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, returns valueFalseOrNull otherwise.
isnull( a ) | Returns true if a is NULL and false otherwise.
isnotnull ( a ) | Returns true if a is not NULL and false otherwise.
nvl(T value, T default_value) | Returns default value if value is null else returns value.
COALESCE(T v1, T v2, ...) | Returns the first v that is not NULL, or NULL if all v's are NULL.
CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b, returns c; when a = d, returns e; else returns f.
nullif( a, b ) | Returns NULL if a=b; otherwise returns a.
assert_true(boolean condition) | Throw an exception if 'condition' is not true, otherwise return null.
ascii(string str) | Returns the numeric value of the first character of str.
base64(binary bin) | Converts the argument from binary to a base 64 string.
character_length(string str) | Returns the number of UTF-8 characters contained in str.
chr(bigint|double A) | Returns the ASCII character having the binary equivalent to A.
concat(string|binary A, string|binary B...) | Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order.
context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences.
concat_ws(string SEP, string A, string B...) | Like concat() above, but with custom separator SEP.
decode(binary bin, string charset) | Decodes the first argument into a String using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null.
elt(N int,str1 string,str2 string,str3 string,...) | Returns the string at index N; elt(2,'hello','world') returns 'world'.
encode(string src, string charset) | Encodes the first argument into a BINARY using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
field(val T,val1 T,val2 T,val3 T,...) | Returns the index of val in the val1,val2,val3,... list or 0 if not found.
find_in_set(string str, string strList) | Returns the first occurrence of str in strList, where strList is a comma-delimited string.
format_number(number x, int d) | Formats the number X to a format like `'#,###,###.##'`, rounded to D decimal places, and returns the result as a string. If D is 0, the result has no decimal point or fractional part.
get_json_object(string json_string, string path) | Extracts json object from a json string based on json path specified, and returns json string of the extracted json object.
in_file(string str, string filename) | Returns true if the string str appears as an entire line in filename.
instr(string str, string substr) | Returns the position of the first occurrence of substr in str.
length(string A) | Returns the length of the string.
locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos.
lower(string A) lcase(string A) | Returns the string resulting from converting all characters of A to lower case.
lpad(string str, int len, string pad) | Returns str, left-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters.
ltrim(string A) | Returns the string resulting from trimming spaces from the beginning (left-hand side) of A.
ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF.
octet_length(string str) | Returns the number of octets required to hold the string str in UTF-8 encoding.
parse_url(string urlString, string partToExtract [, string keyToExtract]) | Returns the specified part from the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO.
printf(String format, Obj... args) | Returns the input formatted according to printf-style format strings.
regexp_extract(string subject, string pattern, int index) | Returns the string extracted using the pattern.
regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Returns the string resulting from replacing all substrings in INITIAL_STRING that match the java regular expression syntax defined in PATTERN with instances of REPLACEMENT.
repeat(string str, int n) | Repeats str n times.
replace(string A, string OLD, string NEW) | Returns the string A with all non-overlapping occurrences of OLD replaced with NEW.
reverse(string A) | Returns the reversed string.
rpad(string str, int len, string pad) | Returns str, right-padded with pad to a length of len.
rtrim(string A) | Returns the string resulting from trimming spaces from the end (right-hand side) of A.
sentences(string str, string lang, string locale) | Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words.
space(int n) | Returns a string of n spaces.
split(string str, string pat) | Splits str around pat (pat is a regular expression).
str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and ':' for delimiter2.
substr(string|binary A, int start) | Returns the substring or slice of the byte array of A starting from start position till the end of A. For example, substr('foobar', 4) results in 'bar'.
substring_index(string A, string delim, int count) | Returns the substring from string A before count occurrences of the delimiter delim.
translate(string|char|varchar input, string|char|varchar from, string|char|varchar to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string.
trim(string A) | Returns the string resulting from trimming spaces from both ends of A.
unbase64(string str) | Converts the argument from a base 64 string to BINARY.
upper(string A) ucase(string A) | Returns the string resulting from converting all characters of A to upper case. For example, upper('fOoBaR') results in 'FOOBAR'.
initcap(string A) | Returns string, with the first letter of each word in uppercase, all other letters in lowercase. Words are delimited by whitespace.
levenshtein(string A, string B) | Returns the Levenshtein distance between two strings.
soundex(string A) | Returns soundex code of the string.
mask(string str[, string upper[, string lower[, string number]]]) | Returns a masked version of str.
mask_first_n(string str[, int n]) | Returns a masked version of str with the first n values masked. mask_first_n("1234-5678-8765-4321", 4) results in nnnn-5678-8765-4321.
mask_last_n(string str[, int n]) | Returns a masked version of str with the last n values masked.
mask_show_first_n(string str[, int n]) | Returns a masked version of str, showing the first n characters unmasked.
mask_show_last_n(string str[, int n]) | Returns a masked version of str, showing the last n characters unmasked.
mask_hash(string|char|varchar str) | Returns a hashed value based on str.
java_method(class, method[, arg1[, arg2..]]) | Synonym for reflect.
reflect(class, method[, arg1[, arg2..]]) | Calls a Java method by matching the argument signature, using reflection.
hash(a1[, a2...]) | Returns a hash value of the arguments.
current_user() | Returns current user name from the configured authenticator manager.
logged_in_user() | Returns current user name from the session state.
current_database() | Returns current database name.
md5(string/binary) | Calculates an MD5 128-bit checksum for the string or binary.
sha1(string/binary), sha(string/binary) | Calculates the SHA-1 digest for string or binary and returns the value as a hex string.
crc32(string/binary) | Computes a cyclic redundancy check value for string or binary argument and returns bigint value.
sha2(string/binary, int) | Calculates the SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512).
aes_encrypt(input string/binary, key string/binary) | Encrypt input using AES.
aes_decrypt(input binary, key string/binary) | Decrypt input using AES.
version() | Returns the Hive version.
count(expr) | Returns the total number of retrieved rows.
sum(col), sum(DISTINCT col) | Returns the sum of the elements in the group or the sum of the distinct values of the column in the group.
avg(col), avg(DISTINCT col) | Returns the average of the elements in the group or the average of the distinct values of the column in the group.
min(col) | Returns the minimum of the column in the group.
max(col) | Returns the maximum value of the column in the group.
variance(col), var_pop(col) | Returns the variance of a numeric column in the group.
var_samp(col) | Returns the unbiased sample variance of a numeric column in the group.
stddev_pop(col) | Returns the standard deviation of a numeric column in the group.
stddev_samp(col) | Returns the unbiased sample standard deviation of a numeric column in the group.
covar_pop(col1, col2) | Returns the population covariance of a pair of numeric columns in the group.
covar_samp(col1, col2) | Returns the sample covariance of a pair of a numeric columns in the group.
corr(col1, col2) | Returns the Pearson coefficient of correlation of a pair of a numeric columns in the group.
percentile(BIGINT col, p) | Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1.
percentile(BIGINT col, array(p1 [, p2]...)) | Returns the exact percentiles p1, p2, ... of a column in the group. pi must be between 0 and 1.
percentile_approx(DOUBLE col, p [, B]) | Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory. Higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value.
percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B]) | Same as above, but accepts and returns an array of percentile values instead of a single one.
regr_avgx(independent, dependent) | Equivalent to avg(dependent).
regr_avgy(independent, dependent) | Equivalent to avg(independent).
regr_count(independent, dependent) | Returns the number of non-null pairs used to fit the linear regression line.
regr_intercept(independent, dependent) | Returns the y-intercept of the linear regression line, i.e. the value of b in the equation dependent = a * independent + b.
regr_r2(independent, dependent) | Returns the coefficient of determination for the regression.
regr_slope(independent, dependent) | Returns the slope of the linear regression line, i.e. the value of a in the equation dependent = a * independent + b.
regr_sxx(independent, dependent) | Equivalent to regr_count(independent, dependent) * var_pop(dependent).
regr_sxy(independent, dependent) | Equivalent to regr_count(independent, dependent) * covar_pop(independent, dependent).
regr_syy(independent, dependent) | Equivalent to regr_count(independent, dependent) * var_pop(independent).
histogram_numeric(col, b) | Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights.
collect_set(col) | Returns a set of objects with duplicate elements eliminated.
collect_list(col) | Returns a list of objects with duplicates.
ntile(INTEGER x) | Divides an ordered partition into x groups called buckets and assigns a bucket number to each row in the partition. This allows easy calculation of tertiles, quartiles, deciles, percentiles and other common summary statistics.
explode(ARRAY<T> a) | Explodes an array to multiple rows. Returns a row-set with a single column (col), one row for each element from the array.
explode(MAP<Tkey,Tvalue> m) | Explodes a map to multiple rows. Returns a row-set with two columns (key, value), one row for each key-value pair from the input map.
posexplode(ARRAY<T> a) | Explodes an array to multiple rows with additional positional column of int type (position of items in the original array, starting with 0). Returns a row-set with two columns (pos,val), one row for each element from the array.
inline(ARRAY<STRUCT<f1:T1,...,fn:Tn>> a) | Explodes an array of structs to multiple rows. Returns a row-set with N columns (N = number of top level elements in the struct), one row per struct from the array.
stack(int r,T1 V1,...,Tn/r Vn) | Breaks up n values V1,...,Vn into r rows. Each row will have n/r columns. r must be constant.
json_tuple(string jsonStr,string k1,...,string kn) | Takes JSON string and a set of n keys, and returns a tuple of n values.
parse_url_tuple(string urlStr,string p1,...,string pn) | Takes URL string and a set of n URL parts, and returns a tuple of n values.
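These functions are not exported by sparklyr: when a function inside a `dplyr` verb is not recognized, it is passed through as-is to Spark SQL, where Hive resolves it. The following is a minimal sketch of that pattern, assuming a local Spark installation (for example, via `spark_install()`):
```{r eval=FALSE}
library(sparklyr)
library(dplyr)

# A minimal sketch assuming a local Spark installation
sc <- spark_connect(master = "local")
cars_tbl <- copy_to(sc, mtcars)

# round() is translated by dplyr; lpad() is unknown to dplyr and is
# passed through verbatim to Spark SQL, where Hive's lpad() handles it
cars_tbl %>%
  mutate(
    mpg_rounded = round(mpg),
    wt_padded = lpad(as.character(wt), 8, "0")
  )
```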
## Modeling {#appendix-modeling}
### MLlib Functions {#appendix-ml-functionlist}
The following tables list the ML algorithms and feature transformers supported in sparklyr:
#### Classification {#appendix-classification}
Algorithm | Function
----------|---------
Decision Trees | ml_decision_tree_classifier()
Gradient-Boosted Trees | ml_gbt_classifier()
Linear Support Vector Machines | ml_linear_svc()
Logistic Regression | ml_logistic_regression()
Multilayer Perceptron | ml_multilayer_perceptron_classifier()
Naive-Bayes | ml_naive_bayes()
One vs Rest | ml_one_vs_rest()
Random Forests | ml_random_forest_classifier()
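For instance, here is a minimal sketch of fitting one of these classifiers through sparklyr's formula interface, assuming the local connection `sc` from the earlier sketch:
```{r eval=FALSE}
# Model the transmission type (am) from weight and horsepower;
# assumes the connection `sc` from the earlier sketch
cars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)
model <- ml_logistic_regression(cars_tbl, am ~ wt + hp)
model
```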
#### Regression {#appendix-regression}
Algorithm | Function
----------|---------
Accelerated Failure Time Survival Regression | ml_aft_survival_regression()
Decision Trees | ml_decision_tree_regressor()
Generalized Linear Regression | ml_generalized_linear_regression()
Gradient-Boosted Trees | ml_gbt_regressor()
Isotonic Regression | ml_isotonic_regression()
Linear Regression | ml_linear_regression()
#### Clustering {#appendix-clustering}
Algorithm | Function
----------|---------
Bisecting K-Means Clustering | ml_bisecting_kmeans()
Gaussian Mixture Clustering | ml_gaussian_mixture()
K-Means Clustering | ml_kmeans()
Latent Dirichlet Allocation | ml_lda()
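Similarly, a minimal k-means sketch with a features-only formula, again assuming the connection `sc`; note that sparklyr replaces the dots in the `iris` column names with underscores when copying to Spark:
```{r eval=FALSE}
iris_tbl <- copy_to(sc, iris, overwrite = TRUE)

# Cluster the petal measurements into three groups
ml_kmeans(iris_tbl, ~ Petal_Length + Petal_Width, k = 3)
```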
#### Recommendation {#appendix-recommendation}
Algorithm | Function
----------|---------
Alternating Least Squares Factorization | ml_als()
#### Frequent Pattern Mining {#appendix-fpm}
Algorithm | Function
----------|---------
FPGrowth | ml_fpgrowth()
#### Feature Transformers {#appendix-feature-transformers}
Transformer | Function
------------|---------
Binarizer | ft_binarizer()
Bucketizer | ft_bucketizer()
Chi-Squared Feature Selector | ft_chisq_selector()
Vocabulary from Document Collections | ft_count_vectorizer()
Discrete Cosine Transform | ft_discrete_cosine_transform()
Transformation using dplyr | ft_dplyr_transformer()
Hadamard Product | ft_elementwise_product()
Feature Hasher | ft_feature_hasher()
Term Frequencies using Hashing | ft_hashing_tf()
Inverse Document Frequency | ft_idf()
Imputation for Missing Values | ft_imputer()
Index to String | ft_index_to_string()
Feature Interaction Transform | ft_interaction()
Rescale to [-1, 1] Range | ft_max_abs_scaler()
Rescale to [min, max] Range | ft_min_max_scaler()
Locality Sensitive Hashing | ft_minhash_lsh()
Converts to n-grams | ft_ngram()
Normalize using the given P-Norm | ft_normalizer()
One-Hot Encoding | ft_one_hot_encoder()
Feature Expansion in Polynomial Space | ft_polynomial_expansion()
Maps to Binned Categorical Features | ft_quantile_discretizer()
SQL Transformation | ft_sql_transformer()
Standardizes Features using Corrected STD | ft_standard_scaler()
Filters out Stop Words | ft_stop_words_remover()
Map to Label Indices | ft_string_indexer()
Splits by White Spaces | ft_tokenizer()
Combine Vectors to Row Vector | ft_vector_assembler()
Indexing Categorical Feature | ft_vector_indexer()
Subarray of the Original Feature | ft_vector_slicer()
Transform Word into Code | ft_word2vec()
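These transformers compose with the modeling functions above; a minimal sketch, assuming `sc` and `cars_tbl` from the earlier sketches (the bucket splits are illustrative only):
```{r eval=FALSE}
# Bucket engine displacement into three ranges, then use the
# bucketed column as a predictor
cars_tbl %>%
  ft_bucketizer("disp", "disp_bucket", splits = c(0, 150, 300, 500)) %>%
  ml_linear_regression(mpg ~ disp_bucket)
```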
## Clusters {#appendix-clusters}
### Google Trends for mainframes, cloud computing, and Kubernetes {#appendix-cluster-trends}
Data downloaded from [https://bit.ly/2YnHkNI](https://bit.ly/2YnHkNI).
```{r eval=FALSE}
library(dplyr)
library(ggplot2)

read.csv("data/clusters-trends.csv", skip = 2) %>%
  mutate(year = as.Date(paste(Month, "-01", sep = ""))) %>%
  mutate(`On-Premise` = `mainframe...Worldwide.`,
         Cloud = `cloud.computing...Worldwide.`,
         Kubernetes = `kubernetes...Worldwide.`) %>%
  tidyr::gather(`On-Premise`, Cloud, Kubernetes,
                key = "trend", value = "popularity") %>%
  ggplot(aes(x = year, y = popularity, group = trend)) +
  geom_line(aes(linetype = trend, color = trend)) +
  scale_x_date(date_breaks = "2 year", date_labels = "%Y") +
  labs(title = "Cluster Computing Trends",
       subtitle = paste("Search popularity for on-premise (mainframe),",
                        "cloud computing and Kubernetes")) +
  scale_color_grey(start = 0.6, end = 0.2) +
  geom_hline(yintercept = 0, size = 1, colour = "#333333") +
  theme(axis.title.x = element_blank())
```
## Streaming {#appendix-streaming}
### Stream Generator {#appendix-streaming-generator}
The `stream_generate_test()` function creates a local test stream; it works independently of a Spark connection. The following example creates five files in a subfolder called "source", each written one second after the previous one.
```{r}
library(sparklyr)
stream_generate_test(iterations = 5, path = "source", interval = 1)
```
After the function completes, all of the files should show up in the "source" folder. Notice that the file sizes vary; this simulates what a real stream would do.
```{r}
file.info(file.path("source", list.files("source")))[1]
```
```
## size
## source/stream_1.csv 44
## source/stream_2.csv 121
## source/stream_3.csv 540
## source/stream_4.csv 2370
## source/stream_5.csv 7236
```
By default, `stream_generate_test()` creates a data frame with a single numeric variable.
```{r}
readr::read_csv("source/stream_5.csv")
```
```
## # A tibble: 1,489 x 1
## x
## <dbl>
## 1 630
## 2 631
## 3 632
## 4 633
## 5 634
## 6 635
## 7 636
## 8 637
## 9 638
## 10 639
## # ... with 1,479 more rows
```
### Installing Kafka {#appendix-streaming-kafka}
These instructions were put together from the official Kafka site, specifically the Quickstart page, at the time of writing. Newer versions of Kafka will undoubtedly be available by the time you read this; the intent is to "timestamp" the versions used by the examples in the Streaming chapter.
1. Download Kafka
```
wget http://apache.claz.org/kafka/2.2.0/kafka_2.12-2.2.0.tgz
```
1. Expand the tar file and enter the new folder
```
tar -xzf kafka_2.12-2.2.0.tgz
cd kafka_2.12-2.2.0
```
1. Start the ZooKeeper service that comes with Kafka
```
bin/zookeeper-server-start.sh config/zookeeper.properties
```
1. Start the Kafka service
```
bin/kafka-server-start.sh config/server.properties
```
Make sure to always start ZooKeeper first, and then Kafka.
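With both services running, you can subscribe to a topic from sparklyr. The following is a minimal sketch, not a definitive recipe: it assumes a topic named `test` already exists, that the broker listens on the default `localhost:9092`, and that your connection includes the Kafka package for your Spark version.
```{r eval=FALSE}
library(sparklyr)

# A minimal sketch; the "test" topic is hypothetical and must exist.
# Depending on your setup, the Kafka package may need to be added
# through spark_connect()'s config, e.g. via sparklyr.shell.packages.
sc <- spark_connect(master = "local")

stream <- stream_read_kafka(
  sc,
  options = list(
    kafka.bootstrap.servers = "localhost:9092",
    subscribe = "test"
  )
)
```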