Merge pull request #63 from UCLAIS/website_update
Website update
Showing 6 changed files with 32 additions and 4 deletions.

our-initiatives/tutorials/2024-2025/intro_to_transformers.md (14 additions & 0 deletions)
@@ -0,0 +1,14 @@
---
sidebar_position: 11
---

# 9: Introduction to Transformers

**Date: 11th December 2024**

💡 **Transformers** were initially introduced for **machine translation**, but are now the most prevalent, state-of-the-art architecture for virtually all deep learning tasks. Unlike traditional neural networks, Transformers rely on a mechanism called **attention**, which allows them to focus on relevant parts of the input sequence. Unlike RNNs, this architecture processes sequential input data in parallel.

Central to this model are the **encoder-decoder blocks**, where input data undergoes **tokenization** and is embedded into vectors with **positional encodings** to capture word order. This week, we will explore the **attention mechanism**, including **multi-headed attention**, the structure of **encoder and decoder blocks**, and the processes involved in **training Transformers**, such as **tokenization, masking strategies**, and managing **computational costs**. 💡

You can access our **slides** here: 💻 [**Tutorial 9 Slides**](https://www.canva.com/design/DAGYOwRh8u8/xn2OqkUHgTGClSoYOhSxYQ/view?utm_content=DAGYOwRh8u8&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=ha097b37913)
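
The attention mechanism summarized above can be made concrete with a short sketch. The snippet below is a minimal NumPy implementation of scaled dot-product attention, written for this summary rather than taken from the tutorial materials; the function name and the toy shapes are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention (illustrative sketch, not the tutorial code)."""
    d_k = Q.shape[-1]
    # Compare every query with every key, scaled by sqrt(d_k) for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_len, seq_len)
    # Row-wise softmax: each row becomes attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of the value vectors.
    return weights @ V                                     # (seq_len, d_v)

# Toy usage: 4 tokens with 8-dimensional embeddings used directly as Q, K and V.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)         # (4, 8)
```

Multi-headed attention runs several such computations in parallel on learned projections of Q, K and V and concatenates their outputs.
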
@@ -2,7 +2,7 @@
 sidebar_position: 7
 ---

-# 4: Neural Networks
+# 5: Neural Networks

 **Date: 13th November 2024**
@@ -0,0 +1,14 @@
---
sidebar_position: 10
---

# 8: Recurrent Neural Networks

**Date: 4th December 2024**

💡 **Recurrent Neural Networks (RNNs)** are a class of models designed to handle sequential data, such as **time series** or **language**, by using **feedback loops** to maintain **context** over time. This week, we will explore the fundamentals of RNNs, the challenges of training them (especially backpropagation through time), and variants such as **Long Short-Term Memory (LSTM)** networks that better capture **long-term dependencies**. We will also briefly contrast these approaches with **transformers**, which have largely replaced RNNs and LSTMs in state-of-the-art applications by using self-attention to model sequence elements in parallel, giving a broader perspective on modern sequence modeling techniques. 💡

You can access our **demonstration notebook** here: 📘 [**Tutorial 8 Notebook**](https://github.com/UCLAIS/ml-tutorials-season-5/blob/main/week-8/rnn.ipynb)

You can access our **slides** here: 💻 [**Tutorial 8 Slides**](https://www.canva.com/design/DAGSEPaNv_I/RpD2FqJCqnRyZxwa_cvsGQ/view?utm_content=DAGSEPaNv_I&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=h053c9bd49f)
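
To make the feedback-loop idea concrete, here is a minimal NumPy sketch of a vanilla RNN forward pass. It is written for this summary rather than taken from the tutorial notebook; the function name, weight shapes and toy data are illustrative assumptions.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Unrolled forward pass of a vanilla RNN (illustrative sketch, not the tutorial code)."""
    hidden_dim = W_hh.shape[0]
    h = np.zeros(hidden_dim)              # initial context (hidden state)
    states = []
    for x_t in inputs:
        # The feedback loop: the new hidden state depends on the current input
        # *and* the previous hidden state, which carries context through time.
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.stack(states)               # hidden state at every time step

# Toy usage: a sequence of 5 steps with 3 features each, 4 hidden units.
rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 3))
W_xh = 0.1 * rng.normal(size=(3, 4))
W_hh = 0.1 * rng.normal(size=(4, 4))
b_h = np.zeros(4)
print(rnn_forward(seq, W_xh, W_hh, b_h).shape)  # (5, 4)
```

LSTMs extend this loop with gating so that useful context can persist over much longer spans, while transformers drop the recurrence entirely in favor of self-attention over the whole sequence at once.
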
@@ -2,7 +2,7 @@
 sidebar_position: 8
 ---

-# 5: Visual Computing I
+# 6: Visual Computing I

 **Date: 20th November 2024**