Commit 48ffe4c

sebffischer committed Jul 4, 2024
1 parent ec7da3f commit 48ffe4c
Showing 5 changed files with 20 additions and 43 deletions.
5 changes: 2 additions & 3 deletions R/PipeOpTorchModel.R
````diff
@@ -74,11 +74,11 @@ PipeOpTorchModel = R6Class("PipeOpTorchModel",
       private$.learner$network_stored = network
       private$.learner$ingress_tokens = md$ingress
 
-      if (is.null(md$loss) && is.null(self$learner$loss)) {
+      if (is.null(md$loss)) {
        stopf("No loss configured in ModelDescriptor. Use po(\"torch_loss\").")
       }
       self$learner$loss = md$loss
-      if (is.null(md$optimizer) && is.null(self$learner$optimizer)) {
+      if (is.null(md$optimizer)) {
        stopf("No optimizer configured in ModelDescriptor. Use po(\"torch_optimizer\").")
       }
       self$learner$optimizer = md$optimizer
@@ -192,4 +192,3 @@ PipeOpTorchModelRegr = R6Class("PipeOpTorchModelRegr",
 #' @include zzz.R
 register_po("torch_model_regr", PipeOpTorchModelRegr)
 register_po("torch_model_classif", PipeOpTorchModelClassif)
-
````
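In practice, the tightened checks mean the `ModelDescriptor` must carry a loss and an optimizer by the time it reaches the model pipeop, which is exactly what the updated error messages point to. A minimal sketch of a compliant graph (the `loss =` and `optimizer =` argument names are assumptions for illustration, not taken from this commit):

``` r
library(mlr3pipelines)
library(mlr3torch)

graph = po("torch_ingress_num") %>>%
  po("nn_head") %>>%
  # add the loss and optimizer to the ModelDescriptor ...
  po("torch_loss", loss = t_loss("cross_entropy")) %>>%
  po("torch_optimizer", optimizer = t_opt("adam")) %>>%
  # ... before the model pipeop, which now errors if either is missing
  po("torch_model_classif", batch_size = 16, epochs = 1)
```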
4 changes: 2 additions & 2 deletions R/learner_torch_methods.R
````diff
@@ -26,7 +26,9 @@ learner_torch_train = function(self, private, super, task, param_vals) {
   }
 
   network = private$.network(task, param_vals)$to(device = param_vals$device)
+  if (is.null(self$optimizer)) stopf("Learner '%s' defines no optimizer", self$id)
   optimizer = self$optimizer$generate(network$parameters)
+  if (is.null(self$loss)) stopf("Learner '%s' defines no loss", self$id)
   loss_fn = self$loss$generate()
 
   measures_train = normalize_to_list(param_vals$measures_train)
@@ -307,5 +309,3 @@ measure_prediction = function(pred_tensor, measures, task, row_ids, prediction_e
     }
   )
 }
-
-
````
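The two added guards make `learner_torch_train()` fail fast with a clear message when a learner reaches training without an optimizer or a loss, instead of erroring inside `$generate()`. A sketch of a learner that sets both up front, so neither guard triggers (assumed parameter values, for illustration only):

``` r
library(mlr3torch)

learner = lrn("classif.mlp",
  batch_size = 16,
  epochs = 1,
  # both must be set, otherwise training now stops early
  optimizer = t_opt("adam", lr = 0.1),
  loss = t_loss("cross_entropy")
)
learner$train(tsk("sonar"))
```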
12 changes: 0 additions & 12 deletions R/utils.R
````diff
@@ -235,18 +235,6 @@ check_nn_module = function(x) {
   check_class(x, "nn_module")
 }
 
-nn_none = nn_module("none",
-  initialize = function() {
-    stop("nn_none should not be initialized, something went wrong")
-  }
-)
-
-optim_none = optimizer("optim_none",
-  initialize = function() {
-    stop("optim_none should not be initialized, something went wrong")
-  }
-)
-
 check_callbacks = function(x) {
   if (test_class(x, "TorchCallback")) {
     x = list(x)
````
27 changes: 12 additions & 15 deletions README.Rmd
````diff
@@ -63,7 +63,7 @@ learner_mlp = lrn("classif.mlp",
   batch_size = 16,
   epochs = 50,
   device = "cpu",
-  # proportion of data for validation
+  # Proportion of data to use for validation
   validate = 0.3,
   # Defining the optimizer, loss, and callbacks
   optimizer = t_opt("adam", lr = 0.1),
@@ -77,17 +77,13 @@ learner_mlp = lrn("classif.mlp",
 )
 ```
 
-The code below evaluates the learner using a simple holdout resampling.
+Below, we train this learner on the sonar example task:
 
 ```{r}
-resample(
-  task = tsk("iris"),
-  learner = learner_mlp,
-  resampling = rsmp("holdout")
-)
+learner_mlp$train(tsk("sonar"))
 ```
 
-Below, we construct the same architecture using `PipeOpTorch` objects.
+Next, we construct the same architecture using `PipeOpTorch` objects.
 The first pipeop -- a `PipeOpTorchIngress` -- defines the entrypoint of the network.
 All subsequent pipeops define the neural network layers.
 
@@ -112,7 +108,7 @@ graph_lrn = as_learner(graph_mlp)
 ```
 
 To work with generic tensors, the `lazy_tensor` type can be used.
-It wraps a `torch::dataset`, but allows to preprocess the data (lazily) using `PipeOp` objects
+It wraps a `torch::dataset`, but allows to preprocess the data (lazily) using `PipeOp` objects.
 Below, we flatten the MNIST task, so we can then train a multi-layer perceptron on it.
 Note that this does *not* transform the data in-memory, but is only applied when the data is actually loaded.
 
@@ -128,7 +124,8 @@ mnist_flat = flattener$train(list(mnist))[[1L]]
 mnist_flat$head(3L)
 ```
 
-To actually access the tensors, we can call `materialize()`, but only show a slice for readability:
+To actually access the tensors, we can call `materialize()`.
+We only show a slice of the resulting tensor for readability:
 
 ```{r}
 materialize(
@@ -137,8 +134,8 @@ materialize(
 )[1:2, 1:4]
 ```
 
-We now define a more complex architecture that has one single input which is a `lazy_tensor`.
-For that, we first deine a single residual block:
+Below, we define a more complex architecture that has one single input which is a `lazy_tensor`.
+For that, we first define a single residual block:
 
 ```{r}
 layer = list(
@@ -150,8 +147,8 @@ layer = list(
 ```
 
 Next, we create a neural network that takes as input a `lazy_tensor` (`po("torch_ingress_num")`).
-It first applies a linear layer and then repeat the above layer using the special `PipeOpTorchBlock`, followed by the network's head.
-After that, we configure the loss and the optimizer and the training parameters.
+It first applies a linear layer and then repeats the above layer using the special `PipeOpTorchBlock`, followed by the network's head.
+After that, we configure the loss, optimizer and the training parameters.
 Note that `po("nn_linear_0")` is equivalent to `po("nn_linear", id = "nn_linear_0")` and we need this here to avoid ID clashes with the linear layer from `po("nn_block")`.
 
 ```{r}
@@ -174,7 +171,7 @@ deep_learner = as_learner(
 deep_learner$id = "deep_network"
 ```
 
-In order to keep track of the performance during training, we use 20% of the data and evaluate the classification accuracy.
+In order to keep track of the performance during training, we use 20% of the data and evaluate it using classification accuracy.
 
 ```{r}
 set_validate(deep_learner, 0.2)
````
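The last hunk above introduces validation tracking in the README. A sketch of how the validation measure is typically attached in mlr3torch (the `measures_valid` parameter is an assumption based on the package's documented API and is not shown in this diff):

``` r
# continues the README's `deep_learner` object from the hunks above
set_validate(deep_learner, 0.2)
deep_learner$param_set$set_values(
  # evaluate classification accuracy on the 20% validation split
  measures_valid = msr("classif.acc")
)
```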
15 changes: 4 additions & 11 deletions README.md
````diff
@@ -53,7 +53,7 @@ learner_mlp = lrn("classif.mlp",
   batch_size = 16,
   epochs = 50,
   device = "cpu",
-  # proportion of data for validation
+  # Proportion of data to use for validation
   validate = 0.3,
   # Defining the optimizer, loss, and callbacks
   optimizer = t_opt("adam", lr = 0.1),
@@ -67,20 +67,13 @@ learner_mlp = lrn("classif.mlp",
 )
 ```
 
-The code below evaluates the learner using a simple holdout resampling.
+Below, we train this learner on the sonar example task:
 
 ``` r
-resample(
-  task = tsk("iris"),
-  learner = learner_mlp,
-  resampling = rsmp("holdout")
-)
-#> <ResampleResult> with 1 resampling iterations
-#>  task_id  learner_id resampling_id iteration warnings errors
-#>     iris classif.mlp       holdout         1        0      0
+learner_mlp$train(tsk("sonar"))
 ```
 
-Below, we construct the same architecture using `PipeOpTorch` objects.
+Next, we construct the same architecture using `PipeOpTorch` objects.
 The first pipeop – a `PipeOpTorchIngress` – defines the entrypoint of
 the network. All subsequent pipeops define the neural network layers.
````
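Since the README now trains the learner directly instead of resampling it, evaluation becomes a separate step. A sketch using the standard mlr3 prediction API (not part of this commit):

``` r
# continues the README's trained `learner_mlp`:
# predict on the task and score the result
prediction = learner_mlp$predict(tsk("sonar"))
prediction$score(msr("classif.acc"))
```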
