Stage <-- Main #323

Merged: 172 commits, Jul 17, 2024

Commits
0c64da7
Add files via upload
ShamieCC Dec 7, 2023
e60a290
Update shamie-Resnet.mdx
ShamieCC Dec 19, 2023
49ebc62
Update shamie-Resnet.mdx
ShamieCC Dec 19, 2023
e52bd4d
Update shamie-Resnet.mdx
ShamieCC Dec 19, 2023
f793dcb
Update shamie-Resnet.mdx
ShamieCC Dec 19, 2023
ca643ac
Update shamie-Resnet.mdx
ShamieCC Dec 19, 2023
177e63d
Update shamie-Resnet.mdx
ShamieCC Dec 29, 2023
f241f50
added cycleGAN notebook WIP
charchit7 Jan 10, 2024
0ecbdc3
Merge branch 'johko:main' into main
sezan92 Jan 22, 2024
7cc8015
Address code review comments
klyap Jan 31, 2024
38d22d1
Improve formatting and upload folder instead of 4 files
klyap Feb 4, 2024
e5f7cf5
Move diagram to intro + add more explanations
klyap Feb 16, 2024
e657f64
Update shamie-Resnet.mdx
sezan92 Mar 12, 2024
92bf1e6
Update shamie-Resnet.mdx
sezan92 Mar 12, 2024
e669612
Update shamie-Resnet.mdx
sezan92 Mar 12, 2024
35817c1
Update shamie-Resnet.mdx
sezan92 Mar 12, 2024
9c5302c
Update and rename shamie-Resnet.mdx to Resnet.mdx
sezan92 Mar 12, 2024
b39a9d4
Merge pull request #14 from sezan92/ShamieCC-patch-1
sezan92 Mar 12, 2024
9020fbb
Merge pull request #28 from sezan92/ShamieCC-Resnet
sezan92 Mar 12, 2024
235ca15
renamed
sezan92 Mar 12, 2024
2d74a0b
update toctree
sezan92 Mar 12, 2024
4e8ab10
Merge branch 'johko:main' into develop
sezan92 Mar 13, 2024
2e60364
incorrect merging reverted
themurtazanazir Mar 13, 2024
62446e7
ran make quality
themurtazanazir Mar 13, 2024
e2c703e
Merge pull request #29 from sezan92/googlenetv2
sezan92 Mar 13, 2024
e2a1768
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
f3b7518
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
8e98abe
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
e1008f9
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
83f9508
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
8c63f4a
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
6750283
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
0f42f3f
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 13, 2024
2164046
Merge branch 'johko:main' into develop
sezan92 Mar 13, 2024
c33df45
Apply suggestions from code review
sezan92 Mar 13, 2024
347221e
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 19, 2024
c5c6629
resnet
sezan92 Mar 19, 2024
94db93c
latest
sezan92 Mar 19, 2024
6b5457f
quality
sezan92 Mar 20, 2024
766bcab
resnet block
sezan92 Mar 25, 2024
5690c14
Merge branch 'johko:main' into develop
sezan92 Mar 25, 2024
a3ee63f
Merge branch 'develop' of github.com:sezan92/computer-vision-course i…
sezan92 Mar 25, 2024
5271088
nits resnet paper and image
sezan92 Mar 25, 2024
72cc31e
nits
sezan92 Mar 25, 2024
0b83e66
example execution
sezan92 Mar 25, 2024
f51f04a
quality
sezan92 Mar 25, 2024
c103cc1
use HF dataset in CycleGAN notebook
Mar 25, 2024
7079cd2
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 26, 2024
024c7ad
Update chapters/en/Unit 2 - Convolutional Neural Networks/resnet.mdx
sezan92 Mar 26, 2024
251c23a
Merge pull request #7 from johko/CG_notebook_johko
charchit7 Mar 28, 2024
d7dfcaf
smaller additions and adjustments
Apr 12, 2024
a0a75c0
smooth out multimodal part
Apr 12, 2024
5015699
Merge branch 'main' into better_transitions
Apr 12, 2024
6960a5d
remove vit introduction and reinsert inductive bias part
Apr 12, 2024
d2b4eda
fix toctree
Apr 14, 2024
e3b500b
unit-5 transitions
Apr 19, 2024
9a8fccd
mostly new introductions
Apr 19, 2024
a46fdf1
revert some automatic format changes
Apr 20, 2024
c2a2f3f
one more revert
Apr 20, 2024
2ff1fb9
Update chapters/en/Unit 5 - Generative Models/Introduction/Introducti…
johko Apr 20, 2024
d644329
Update chapters/en/Unit 5 - Generative Models/PRACTICAL APPLICATIONS …
johko Apr 20, 2024
1b5eacd
Update chapters/en/Unit 6 - Basic CV Tasks/introduction.mdx
johko Apr 20, 2024
e1911b9
Update chapters/en/Unit 6 - Basic CV Tasks/introduction.mdx
johko Apr 20, 2024
38db2b1
Update chapters/en/Unit 6 - Basic CV Tasks/introduction.mdx
johko Apr 20, 2024
f40cd17
Update chapters/en/Unit 7 - Video and Video Processing/introduction-t…
johko Apr 20, 2024
23c46e8
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
99d41a1
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
3089c82
Update chapters/en/Unit 7 - Video and Video Processing/introduction-t…
johko Apr 20, 2024
98878db
Update chapters/en/Unit 7 - Video and Video Processing/introduction-t…
johko Apr 20, 2024
14750c2
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
2260e23
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
5845b9a
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
320a643
Update chapters/en/Unit 7 - Video and Video Processing/introduction-t…
johko Apr 20, 2024
963830b
Update chapters/en/Unit 7 - Video and Video Processing/introduction-t…
johko Apr 20, 2024
bfb45ce
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
437e983
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
77a93b3
Update chapters/en/Unit 8 - 3D Vision, Scene Rendering and Reconstruc…
johko Apr 20, 2024
5c2ede2
updated README and added CONTRIBUTING
Apr 21, 2024
01354d4
soem more updates
Apr 21, 2024
95ff70e
Merge pull request #250 from johko/better_transitions
merveenoyan Apr 22, 2024
a8a7c48
Merge pull request #254 from johko/update_readme_and_contributing
merveenoyan Apr 22, 2024
facfab6
Merge pull request #252 from johko/better-transitions-2
merveenoyan Apr 22, 2024
8f2c412
slugify titles and toctree
merveenoyan Apr 24, 2024
e821557
Merge pull request #256 from johko/slugify
johko Apr 24, 2024
f1221db
Update _toctree.yml
lunarflu Apr 24, 2024
cdb44fd
Update _toctree.yml
lunarflu Apr 24, 2024
925c40c
Update _toctree.yml
lunarflu Apr 24, 2024
ba0416b
Update _toctree.yml
lunarflu Apr 24, 2024
3ae57f5
Update _toctree.yml
lunarflu Apr 24, 2024
b47a513
Merge pull request #258 from lunarflu/main
merveenoyan Apr 24, 2024
0378428
Added my contrib
pedrogengo Apr 24, 2024
aa381c0
Fixing Typos and Latex expressions on hyena.mdx
lulmer Apr 24, 2024
9d1f4da
Update intro_to_model_optimization.mdx
adhiiisetiawan Apr 25, 2024
e724864
edit image
JvThunder Apr 25, 2024
1c20efb
Update examples-preprocess.mdx
meetdp Apr 25, 2024
da285d5
Italic rendering problem updated
sergiopaniego Apr 25, 2024
d02f9b6
Small typo fixed
sergiopaniego Apr 25, 2024
c01ff81
Merge branch 'main' of github.com:sezan92/computer-vision-course into…
sezan92 Apr 25, 2024
a2122c6
resnet
sezan92 Apr 25, 2024
0e2b990
resnet
sezan92 Apr 25, 2024
55756e7
Merge pull request #266 from sergiopaniego/small_typo
ATaylorAerospace Apr 25, 2024
7b515d9
Merge pull request #265 from sergiopaniego/italic_typo
ATaylorAerospace Apr 25, 2024
7ad76dc
typo in import statement
omkar-12bits Apr 25, 2024
1482613
Merge pull request #261 from adhiiisetiawan/main
merveenoyan Apr 25, 2024
b47ba80
Merge pull request #260 from pedrogengo/main
merveenoyan Apr 25, 2024
4082561
Merge pull request #259 from lulmer/patch-1
merveenoyan Apr 25, 2024
21faa85
Merge pull request #264 from meetdp/patch-1
merveenoyan Apr 25, 2024
e5b68de
Fix latex
merveenoyan Apr 25, 2024
cc85858
Update linear-algebra.mdx
merveenoyan Apr 25, 2024
ca8af91
fix link
merveenoyan Apr 25, 2024
228160e
Merge pull request #270 from johko/fix_link
ATaylorAerospace Apr 25, 2024
357e9aa
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
112faed
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
4f482f0
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
38bdba2
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
a6ae04b
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
736555e
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 26, 2024
337ac5b
remove alias
omkar-12bits Apr 26, 2024
4437a47
Merge pull request #269 from johko/fix_latex
ATaylorAerospace Apr 26, 2024
659b7de
Merge branch 'sezan92-edit-image-convnext'
JvThunder Apr 26, 2024
e38707b
Merge remote-tracking branch 'origin/main'
JvThunder Apr 26, 2024
dac3a35
Merge pull request #262 from sezan92/edit-image-convnext
JvThunder Apr 26, 2024
ad0a286
removing weird word motivation
bellabf Apr 26, 2024
c1c924f
missing connector
bellabf Apr 26, 2024
f51f12d
abbreviation
bellabf Apr 26, 2024
721752b
missing with
bellabf Apr 26, 2024
e4e15fb
new unit name
bellabf Apr 26, 2024
616bd43
fix chapter name
bellabf Apr 26, 2024
f0ebd3c
naming and tie to other chapters
bellabf Apr 26, 2024
cc03d28
double ":" in applications
bellabf Apr 26, 2024
b8f6fda
british to american spelling
bellabf Apr 26, 2024
f54dec4
Update introduction.mdx (Fix issue #268)
paartheee Apr 26, 2024
4f8a79a
Wrong image url into correct one
paartheee Apr 26, 2024
6c477c5
Added missing quotes to sentence
sergiopaniego Apr 26, 2024
9765c95
fix math typos
omkar-12bits Apr 27, 2024
de4f7c8
docs: Update image.mdx (typos)
genrry Apr 27, 2024
8a7b525
unit 8 fixes - latex stuff and broken link minor corrections
psetinek Apr 28, 2024
415e58e
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
4008cda
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
5419817
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
f6aedd4
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
ff393a1
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
d1a92e4
Update chapters/en/unit2/cnns/resnet.mdx
sezan92 Apr 29, 2024
2259cdc
Merge pull request #285 from psetinek/unit8_fixes
merveenoyan Apr 30, 2024
ff4311a
Merge pull request #276 from sergiopaniego/missing_quotes
merveenoyan Apr 30, 2024
abfdf19
Update chapters/en/unit4/multimodal-models/a_multimodal_world.mdx
merveenoyan Apr 30, 2024
e17e96c
Merge pull request #273 from partheee/patch-2
merveenoyan Apr 30, 2024
3e01978
Merge pull request #271 from partheee/patch-1
merveenoyan Apr 30, 2024
23d5de5
Merge pull request #267 from omkar-12bits/master
merveenoyan Apr 30, 2024
a32272d
Merge pull request #227 from sezan92/develop
merveenoyan Apr 30, 2024
926cbf4
Merge pull request #282 from genrry/patch-1
merveenoyan Apr 30, 2024
e03f0d4
Merge pull request #272 from bellabf/main
ATaylorAerospace Apr 30, 2024
0113d12
Merge branch 'johko:main' into googlenet
omkar-12bits Apr 30, 2024
cc6c93a
working latex format
omkar-12bits Apr 30, 2024
d9d541a
$ format issue fix
omkar-12bits Apr 30, 2024
4f5aef0
test html render
psetinek Apr 30, 2024
97d728a
test html
psetinek Apr 30, 2024
567efbe
test html v2
psetinek Apr 30, 2024
8c1af0d
centered all images that needed it and their imgsubtitles
psetinek Apr 30, 2024
e395148
test hyperlink
psetinek Apr 30, 2024
6a99e68
fix hyperlinks
psetinek Apr 30, 2024
d761c38
small fixes
psetinek Apr 30, 2024
5cf951a
small fixes final
psetinek Apr 30, 2024
313e295
Merge pull request #286 from psetinek/unit8_fixes
ATaylorAerospace May 1, 2024
0bdde2e
fixing a typo !
mohammad-gh009 May 2, 2024
0261016
Merge pull request #293 from mohammad-gh009/patch-4
merveenoyan May 2, 2024
6a8a91c
Merge pull request #278 from omkar-12bits/googlenet
merveenoyan May 2, 2024
3556121
Delete the duplicate sentence!
mohammad-gh009 May 3, 2024
f2fa020
Merge pull request #296 from mohammad-gh009/patch-5
merveenoyan May 7, 2024
0823c96
add johko as codeowner for chapters and notebooks as well
Jun 28, 2024
371b18d
Merge pull request #175 from hwaseem04/CG_notebook
johko Jun 28, 2024
e65cc91
Merge pull request #319 from johko/add-johko-as-codeowner
lunarflu Jul 3, 2024
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -7,6 +7,6 @@
# modifies files in /chapters/ or /notebooks/, only the here mentioned
# and not the global owner(s) will be requested for a review.

-/chapters/ @merveenoyan
+/chapters/ @merveenoyan @johko

-/notebooks/ @merveenoyan
+/notebooks/ @merveenoyan @johko
135 changes: 135 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,135 @@
# How To Contribute

Hey 👋, great that you want to contribute to the Community Computer Vision Course! We're happy to hear your suggestions and ideas!

### Adding content to the course
*Important Note: If you’ve never contributed to open-source projects on GitHub, kindly read [this document](https://www.notion.so/19411c29298644df8e9656af45a7686d?pvs=25), which shows how to do so with an example from the skops repository.*

1. First go to the [discussion section](https://github.com/johko/computer-vision-course/discussions/).
2. Here you find a section for each unit of the course. Go to the unit you want to contribute to. Open a new discussion and describe what you want to add.
3. Wait for approval or change requests from the repository maintainers.
4. When your suggestion is approved, follow these steps:
1. Create an `.mdx` file or Jupyter Notebook for the topic you want to contribute to
2. Please carefully read through our [Content Guidelines](#📝-content-guidelines)
    3. When you feel like you are ready, create a pull request to this repository. See [How to Create a Pull Request](#how-to-create-a-pull-request-pr)


### Typos/Bug fixes
1. Open an [Issue](https://github.com/johko/computer-vision-course/issues) describing which content you want to add, change or fix
2. Wait for an approval from the repository maintainers
3. Follow the steps below to create a PR


### How to Create a Pull Request (PR)
1. Fork the repository
2. Create a new branch for your changes
3. Make your changes
4. Create a pull request to the [stage](https://github.com/johko/computer-vision-course/tree/stage) branch of the main repository (see the command-line sketch after this list)
5. Wait for the maintainers to approve your PR
6. Celebrate your contribution 🥳🎉
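
A minimal command-line sketch of this workflow (the fork URL, branch name, and commit message are placeholders):
```bash
# clone your fork and create a feature branch
git clone https://github.com/<your-username>/computer-vision-course.git
cd computer-vision-course
git checkout -b my-new-chapter

# make your changes, then commit and push
git add .
git commit -m "Add new chapter section"
git push origin my-new-chapter
# finally, open a pull request against the 'stage' branch on GitHub
```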

### Releases
We will collect contributions in the *stage* branch and publish new releases of the course at regular intervals.


## 📝 Content Guidelines

**Syntax and Doc Rules ❗️❗️**

These rules are required to render the course on hf.co/learn 😊
1. Every chapter should have a main header (h1, e.g. `# Introduction`) before the content.
2. We can use the same syntax features as in the Hugging Face course! This includes 👇

**Tip Blocks**
Write tips like so:
```
<Tip>
Write your note here
</Tip>
```
You can write warnings like this:
`<Tip warning={true}>`
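For instance, a warning block (the text inside is just an illustration) would look like:
```
<Tip warning={true}>
Training this model from scratch requires significant GPU resources.
</Tip>
```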

**Framework Dependent Code**
To have multiple frameworks in one code snippet with a toggle, you can do:
```
<frameworkcontent>
<pt>
PyTorch content goes here
</pt>
<tf>
TensorFlow content goes here
</tf>
<flax>
Flax content goes here
</flax>
</frameworkcontent>
```
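For instance, a minimal sketch of such a toggle (the model calls below are only illustrative) could look like:
```
<frameworkcontent>
<pt>
Load a pretrained ResNet with `torchvision.models.resnet50(weights="IMAGENET1K_V2")`.
</pt>
<tf>
Load a pretrained ResNet with `tf.keras.applications.ResNet50(weights="imagenet")`.
</tf>
</frameworkcontent>
```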
**Embedding Gradio Demos**
You can embed any Gradio demo that is hosted on Hugging Face Spaces like below 👇 Just set the `src` to the URL `{{username}}-{{space_id}}.hf.space`.
```
<iframe
src="https://openai-whisper.hf.space"
frameborder="0"
width="850"
height="450">
</iframe>
```

**Anchor Links for Headers**
If you want to refer to a section inside the text, you can do it like below 👇
```
## My awesome section[[some-section]]
// the anchor link is: `some-section`
```
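You can then link to that section from elsewhere in the text, for example:
```
As discussed in [my awesome section](#some-section), ...
```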

**Code Blocks**
You can write code blocks by wrapping them in three backticks. Please add the associated language code, e.g. `py` or `bash`, after the opening backticks to enable language-specific rendering of code blocks.
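
For example, a short `py`-tagged block (purely illustrative) renders with Python highlighting:
```py
# purely illustrative snippet
import torch

x = torch.rand(3, 224, 224)  # a random image-shaped tensor
print(x.shape)
```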

**LaTeX**
You can write inline LaTeX like this: `\\( X \\)`
You can write standalone LaTeX by enclosing it in `$$`.
For example 👇
```
$$Y = X * \textbf{dequantize}(W); \text{quantize}(W)$$
```
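For instance, an inline formula could appear within a sentence like this (just an illustration):
```
The residual block computes \\( y = F(x) + x \\), where \\( x \\) is the block input.
```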


3. Add your chapter to `_toctree.yml`.
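
As a rough sketch (the titles and paths below are placeholders; mirror the entries already present in `_toctree.yml`), a new entry could look like:
```yml
- title: Unit 2 - Convolutional Neural Networks
  sections:
    - title: ResNet
      local: unit2/cnns/resnet
```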

- Note that the directory structure is as follows, so when you add a new chapter, make sure to stick to it:
```
.
└── course/
└── chapters/
├── chapter0/
│ ├── introduction.mdx
│ └── getting_started.mdx
├── chapter1/
│ └── ...mdx
└── _toctree.yml
```

- If you need advice on the tone of your content, feel free to check out [Hugging Face Audio Course](https://huggingface.co/learn/audio-course/chapter0/introduction) as it's a good example.

- If you have any images, videos, or other assets in your PRs, please store them in [this Hugging Face repository](https://huggingface.co/datasets/hf-vision/course-assets) to keep this repository lightweight. You can ask for access to the organization if you aren't a part of it yet. The steps to do so are below 👇
1. Request to join the https://huggingface.co/hf-vision organization.
2. Upload an image to https://huggingface.co/datasets/hf-vision/course-assets, e.g. via the web UI.
  3. Get the URL: right-click the "Download" button and copy the link. If the link contains `blob`, replace it with `resolve` (e.g. https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/all_models.png).
  4. Use that link in standard Markdown, like `![image](link-to-image)`

### Notebooks

Thanks to Hugging Face's documentation builder, adding `[[open-in-colab]]` at the top of an `.mdx` file creates a button that opens a notebook version of your markdown file in Colab. If you still want to create a notebook that is separate from the markdown (and the markdown already provides the full context on its own), you can do so in the notebooks folder. Make sure to add it to the associated chapter's folder under notebooks (and if that folder doesn't exist, feel free to create it).


### 🗣 Asking for Help

Do not hesitate to ask for help in the #cv-community-project channel on the Hugging Face Discord. 🫂

### Tips and Hints


- For an easier collaboration when **working on notebooks together**, feel free to use [ReviewNB](https://www.reviewnb.com/), which is free for open-source and educational use cases.

- In the requirements.txt file you can find some packages that can be helpful when creating the material. As we originate from the Hugging Face community, we recommend using the [transformers](https://huggingface.co/docs/transformers/), [datasets](https://huggingface.co/docs/datasets/), [evaluate](https://huggingface.co/docs/evaluate/) and [timm](https://huggingface.co/docs/timm/) libraries (see the sketch below).
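
For instance, a minimal sketch of running a pretrained vision model with `transformers` in a notebook (the checkpoint and image path are just placeholders):
```py
from transformers import pipeline

# image-classification pipeline with an example ViT checkpoint
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
predictions = classifier("path/to/your/image.jpg")
print(predictions)
```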
218 changes: 33 additions & 185 deletions README.md
@@ -1,200 +1,48 @@
![Course](https://github.com/johko/computer-vision-course/assets/53175384/58e39903-5a3a-4d48-8f3c-5811f31b93b5)

# Community-led Computer Vision Community Course Sprint 🤗
# Community-led Computer Vision Community Course 🤗

This is the repository for a community-led course on Computer Vision. Once finished, the course will be released in huggingface.co/learn. Below, you can find how you can help us in this effort.
This is the repository for a community-led course on Computer Vision. Over 60 contributors from the Hugging Face Computer Vision community have worked together on the content for this course.

### Table of Contents
## Course Table of Contents
0. Welcome
1. Fundamentals
2. Convolutional Neural Networks
3. Vision Transformers
4. Multimodal Models
5. Generative Models
6. Basic CV Tasks
7. Video and Video Processing
8. 3D Vision, Scene Rendering and Reconstruction
9. Model Optimization
10. Synthetic Data Creation
11. Zero Shot Computer Vision
12. Ethics and Biases
13. Outlook

- 🏃‍♂️ **How to get started**
- 🤝 **How to contribute to course**
- 📆 **Deadlines**
- 💝 **Prizes**
- 📝 **Content guidelines (Syntax, Notebooks, Becoming a Reviewer)**
- 🫂 **Asking for Help**
## Community Powered
The result you have in front of you is as diverse as the community. A typical educational course is created by a small group of people who try to closely match each other's tone. We took a different road. While following a plan for which content we wanted to include, all authors had freedom in the choice of their style. Other members of the community reviewed the content and approved it or suggested changes.

### 🏃‍♂️ How to get started
The outcome is a truly unique course and proof of what a strong open-source community can achieve.

1. Join us in Discord 👾
If you want to **contribute** content or suggest some **typo/bug fixes**, head over to the [Contribution Guidelines](CONTRIBUTING.md).

Join [the Hugging Face discord](https://discord.gg/hugging-face-879548962464493619), take the open-source role, and join us in the #cv-community-project channel.
<img width="491" alt="image" src="https://github.com/lunarflu/fork-computer-vision-course/assets/70143200/c13d5b34-ed1c-4f12-b044-192484b94f9d">
<img width="180" alt="image" src="https://github.com/lunarflu/fork-computer-vision-course/assets/70143200/b3372a47-711f-4b43-bc85-0ba2b6f8b914">

2. Pick a section from the [table of contents](https://docs.google.com/spreadsheets/d/1fjmbsdGwe7IUMBv74LDC7IpoJy8ijiFdzGdnDlBv6eA/edit#gid=0) and add your first name and discord username there.
(Please only sign up for two chapters maximum for now.)
3. Connect with your team members in Discord




### 🤝 How to contribute to the course
Important Note: If you’ve never contributed to open-source projects on GitHub, kindly read [this document](https://www.notion.so/19411c29298644df8e9656af45a7686d?pvs=25) to learn how to do so.

Before you start reading more about the contribution guidelines, this is what our course outline looks like:

```
.

└── chapters/
├── chapter0/
│ ├── introduction.mdx
│ └── getting_started.mdx
├── chapter1/
│ └── ...mdx
└── _toctree.yml
```

1. First go to the [discussion section](https://github.com/johko/computer-vision-course/discussions/).
2. Each new chapter outline should be put up under a pinned discussion. As shown in this [example](https://github.com/johko/computer-vision-course/discussions/80).
3. Each chapter can contain multiple topics and sub-topics. All of that should be defined in the chapter outline. Also, for each sub-topic, a single contributor should be assigned to work on that. See more in this [example](https://github.com/johko/computer-vision-course/discussions/80).
4. Interested contributors motivated to work on a common chapter can come together to form a team, where each contributor can directly contribute to a sub-topic and fellow teammates can act as reviewers.
5. One contributor from each team should fork the repository and other contributors should mutually agree on working collaboratively under that forked repo.
6. Under the forked repo, contributors should create issues (that will be the sub-topic name) and start working on that issue.
7. When contributors work on a subtopic, they should follow the below instructions:
1. Create `.mdx` files or Jupyter Notebooks for the sub-topics you want to contribute to
2. Make sure to update the requirements.txt file in the root of the repository
3. When you feel like you are ready, create a pull request to this repository.
    4. Your teammates will review your PR under that forked repo and then, if it is approved, you can create the PR to merge it into our main branch 🤗


**Tip:** Contribute one subsection at a time, so that it’s not overwhelming for both you and reviewers.


## 📆 Deadlines

We are aiming to have version 1 of the course ready with this sprint.

**Sprint Beginning:** Dec 3rd

**Submission for Pull Request Reviews Deadline:** Dec 13th

**Iterating over Pull Reviews until:** Dec 27th

First version will be complete before December 29th.

## 💝 Prizes

We will have prizes for those who contribute to the course 🤗
Moreover, each contributor will have their name added to the credits section of the associated chapter.

- We will provide one month of Hugging Face PRO subscription or GPU grant.
- You can earn the special merch made for this sprint (with the logo of this sprint). 👕👚
If you are curious about the Hugging Face Computer Vision Community, read on 🔽

The amount of contribution required to earn the prizes will be announced. Stay tuned!

### 📝 Content Guidelines
## Hugging Face Computer Vision Community
Join us in Discord 👾

**Syntax and Doc Rules ❗️❗️**

These rules are required to render the course on hf.co/learn 😊
1. Every chapter should have a main header (h1, e.g. # Introduction) before the content.
2. We can use the syntax features in Hugging Face course! This includes 👇

**Tip Blocks**
Write tips like so:
```
<Tip>
Write your note here
</Tip>
```
You can write warnings like this:
`<Tip warning={true}>`

**Framework Dependent Code**
To have multiple frameworks in one code snippet with a toggle, you can do:
```
<frameworkcontent>
<pt>
PyTorch content goes here
</pt>
<tf>
TensorFlow content goes here
</tf>
<flax>
Flax content goes here
</flax>
</frameworkcontent>
```
**Embedding Gradio Demos**
You can embed any Gradio demo that is hosted on Hugging Face Spaces like below 👇 Just set the `src` to the URL `{{username}}-{{space_id}}.hf.space`.
```
<iframe
src="https://openai-whisper.hf.space"
frameborder="0"
width="850"
height="450">
</iframe>
```

**Anchor Links for Headers**
If you want to refer to a section inside the text, you can do it like below 👇
```
## My awesome section[[some-section]]
// the anchor link is: `some-section`
```

**Code Blocks**
You can write code blocks by wrapping them in three backticks. Please add the associated language code, e.g. `py` or `bash`, after the opening backticks to enable language-specific rendering of code blocks.

**LaTeX**
You can write inline LaTeX like this: `\\( X \\)`
You can write standalone LaTeX by enclosing it in `$$`.
For example 👇
```
$$Y = X * \textbf{dequantize}(W); \text{quantize}(W)$$
```


3. Add your chapter to `_toctree.yml`.

- Note that the directory structure is as follows, so when you add a new chapter, make sure to stick to it:
```
.
└── course/
└── chapters/
├── chapter0/
│ ├── introduction.mdx
│ └── getting_started.mdx
├── chapter1/
│ └── ...mdx
└── _toctree.yml
```

- If you need advice on the tone of your content, feel free to check out [Hugging Face Audio Course](https://huggingface.co/learn/audio-course/chapter0/introduction) as it's a good example.

- Before contributing, please read the general [contribution guide](https://huggingface2.notion.site/Contribution-Guide-19411c29298644df8e9656af45a7686d?pvs=4).

- If you have any images, videos and more in your PRs, please store them in [this Hugging Face repository](https://huggingface.co/datasets/hf-vision/course-assets) to keep this repository lightweight. You can ask for an access to the organization if you aren't a part of it yet. The steps to do so are below 👇
1. Request to join the https://huggingface.co/hf-vision organization.
2. Upload an image to https://huggingface.co/datasets/hf-vision/course-assets, e.g. via the web UI.
  3. Get the URL: right-click the "Download" button and copy the link. If the link contains `blob`, replace it with `resolve` (e.g. https://huggingface.co/datasets/huggingface-course/audio-course-images/resolve/main/all_models.png).
4. Use that in standard markdown like ![image](link-to-image)

### Notebooks

Thanks to Hugging Face's documentation builder, adding `[[open-in-colab]]` at the top of an `.mdx` file creates a button that opens a notebook version of your markdown file in Colab. If you still want to create a notebook that is separate from the markdown (and the markdown already provides the full context on its own), you can do so in the notebooks folder. Make sure to add it to the associated chapter's folder under notebooks (and if that folder doesn't exist, feel free to create it).

### Become a Reviewer

Is everything already assigned, but do you want to contribute to the course? No worries, you can still become a reviewer! This will allow you to review the notebooks and READMEs and give feedback to the authors.

1. Go [here](https://docs.google.com/spreadsheets/d/1fjmbsdGwe7IUMBv74LDC7IpoJy8ijiFdzGdnDlBv6eA/edit#gid=0) and at the bottom choose the "Reviewers" tab,
2. Add your first name and GitHub username,
3. You'll be added to the repo as a contributor and will be able to review Pull Requests (PRs)!
4. Assign yourself to any PRs you think make sense for you,
5. While reviewing, ask yourself if the changes and additions make sense - After all, the most important part of a course is conveying ideas properly, which makes the learning experience smoother.
6. Share your feedback and ideas with the authors on how they can improve. In the long term, we're aiming to make a robust, high-quality course accessible to everyone!
7. Finally, every PR can be merged when it has two approvals from reviewers.

### 🗣 Asking for Help

Do not hesitate to ask for help in #cv-community-project channel. 🫂
Join [the Hugging Face discord](https://discord.gg/hugging-face-879548962464493619), take the open-source role, and join us in the #cv-community-project channel for discussions about the course. You can also check out the #computer-vision channel for more general discussions and questions about Computer Vision.
<img width="491" alt="image" src="https://github.com/lunarflu/fork-computer-vision-course/assets/70143200/c13d5b34-ed1c-4f12-b044-192484b94f9d">
<img width="180" alt="image" src="https://github.com/lunarflu/fork-computer-vision-course/assets/70143200/b3372a47-711f-4b43-bc85-0ba2b6f8b914">

### Tips and Hints
### Contributors

<a href="https://github.com/johko/computer-vision-course/graphs/contributors">
<img src="https://contrib.rocks/image?repo=johko/computer-vision-course" />
</a>

- For an easier collaboration when **working on notebooks together**, feel free to use [ReviewNB](https://www.reviewnb.com/), which is free for open-source and educational use cases.
### Star History

- In the requirements.txt file you can find some packages that can be helpful when creating the material. As we're originating from the HuggingFace community, we can recommend using the [transformers](https://huggingface.co/docs/transformers/), [datasets](https://huggingface.co/docs/datasets/), [evaluate](https://huggingface.co/docs/evaluate/) and [timm](https://huggingface.co/docs/timm/) libraries.
[![Star History Chart](https://api.star-history.com/svg?repos=johko/computer-vision-course&type=Date)](https://star-history.com/#johko/computer-vision-course&Date)