Add rnn and ch workshop content.
angela24680403 committed Dec 1, 2023
1 parent 8838e8e commit 53452c6
Showing 2 changed files with 11 additions and 1 deletion.
8 changes: 7 additions & 1 deletion our-initiatives/tutorials/climate-hack.md
@@ -4,4 +4,10 @@ sidebar_position: 16

# ClimateHack.AI Workshop

**Date: Coming soon!**
**Date: 8th December 2023 (Friday 5pm)**

💡 **ClimateHack.AI has been launched!** This workshop will help you get started with the hackathon, with tips and tricks to improve your model and move up the leaderboard 🙌🏼.

**Motivation**: The **National Grid ESO** currently relies on expensive and carbon-intensive **natural gas generators** to compensate for the variability of **solar PV power production**. By incorporating **satellite imagery** into near-term solar **PV forecasting models**, the ESO can improve the accuracy of these forecasts and **reduce its reliance on natural gas generators**.

We will be going over each stage of the **ML flow**, from preprocessing data to choosing and training your models, so that over the next four hours you can develop a model for **site-level PV forecasting** that is both accurate and performant. Your contributions could directly help **cut carbon emissions** in Great Britain by up to **100 kilotonnes** per year! 💡
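The stages above can be sketched end to end on synthetic data. This is a minimal illustration only: the feature names, sizes, and least-squares baseline are assumptions for the example, not the actual ClimateHack.AI dataset or a competitive model.

```python
import numpy as np

# Illustrative only: synthetic stand-in for satellite/PV features, not the real dataset.
rng = np.random.default_rng(42)

# 1. "Load" data: a few hypothetical features per sample
#    (e.g. recent PV output, cloud cover, irradiance, hour of day).
X = rng.standard_normal((200, 4))
true_w = np.array([0.6, -0.3, 0.8, 0.1])
y = X @ true_w + 0.05 * rng.standard_normal(200)  # next-step PV output + noise

# 2. Preprocess: standardise features so they are on comparable scales.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 3. Choose and train a model: ordinary least squares as a simple baseline.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# 4. Evaluate: mean absolute error (a real entry would use a held-out test set).
mae = np.abs(X @ w - y).mean()
print(f"baseline MAE: {mae:.3f}")
```

A real submission would swap the linear baseline for a model that ingests satellite imagery, but the preprocess / train / evaluate loop stays the same.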
4 changes: 4 additions & 0 deletions our-initiatives/tutorials/rnns.md
@@ -5,3 +5,7 @@ sidebar_position: 9
# 7: Recurrent Neural Networks

**Date: 6th November 2023**

💡 **Recurrent neural networks** (RNNs) are a type of artificial neural network (ANN) well suited to processing **sequential data**, such as **time series** or **natural language**. Unlike **feedforward neural networks**, where information flows in one direction, RNNs have **feedback loops** that allow them to retain information about previous inputs. This week we will be covering what RNNs are, how to train such models, the problems faced during RNN **backpropagation**, and **variations of RNNs** such as the **long short-term memory (LSTM)** model. 💡
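The feedback loop can be made concrete with a single RNN cell: the new hidden state depends on both the current input and the previous hidden state. This is a minimal numpy sketch with illustrative layer sizes, not code from the tutorial.

```python
import numpy as np

# Hypothetical sizes for illustration.
input_size, hidden_size = 3, 4
rng = np.random.default_rng(0)

W_xh = 0.1 * rng.standard_normal((hidden_size, input_size))   # input -> hidden
W_hh = 0.1 * rng.standard_normal((hidden_size, hidden_size))  # hidden -> hidden (the feedback loop)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: mixes the current input with the carried-over hidden state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Process a short sequence, threading the hidden state through each step.
h = np.zeros(hidden_size)
for x in rng.standard_normal((5, input_size)):
    h = rnn_step(x, h)

print(h.shape)  # the final hidden state summarises the whole sequence
```

Training unrolls this loop over time and backpropagates through every step, which is exactly where the vanishing/exploding-gradient problems covered in the session arise.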

You can access our **slides** here: 💻 [**Tutorial 7 Slides**](https://www.canva.com/design/DAF0bkkh7uE/PlWo9_wcOAVhDKP_SdE56g/edit?utm_content=DAF0bkkh7uE&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton)
