# GSoC 2023
| Date (2023) | Description | Comment |
|---|---|---|
| February 7 | Mentoring organization application deadline | 👍 |
- GSoC Home Page
- CVAT project ideas list
- CVAT.ai home site
- CVAT Wiki
- Source code: GitHub/CVAT
- Mailing list for discussion: cvat-gsoc-2023
Project ideas:

- Load and visualize 16-bit medical images
- Keyboard shortcuts customization
- Quality control: consensus
- Quality control: honeypot
- Embedded specification for a dataset
- Localization support
All work is in Python and TypeScript unless otherwise noted.
1. #### _IDEA:_ Load and visualize 16-bit medical images
   * ***Description:*** Digital projection X-ray images in DICOM format use more than 8 bits per pixel and are therefore encoded in two bytes, even if not all 16 bits are used. Right now CVAT converts 16-bit images into 8-bit ones. For medical images this loses important information and makes efficient annotation impossible: a doctor has to adjust the contrast of individual regions manually to annotate such visual data (see the sketch after this idea).
   * ***Expected Outcomes:***
     * Upload digital projection X-ray images in DICOM and convert them to 16-bit PNG.
     * Visualize 16-bit PNG images in the browser using WebGL.
     * Implement brightness, inversion, contrast, and saturation adjustments using WebGL.
     * Import/export datasets in CVAT format.
     * Add functional tests and documentation.
   * ***Resources:***
   * ***Skills Required:*** Python, TypeScript, WebGL
   * ***Possible Mentors:*** Boris Sekachev
   * ***Difficulty:*** Hard
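   The server-side conversion step could look roughly like the following Python sketch. It assumes `pydicom` and `opencv-python` are installed; the function name `dicom_to_png16` and the simple range-stretching step are illustrative only, not part of CVAT's current pipeline.

   ```python
   # Hypothetical helper: convert a digital projection X-ray DICOM file to a 16-bit PNG.
   import numpy as np
   import pydicom
   import cv2


   def dicom_to_png16(dicom_path: str, png_path: str) -> None:
       """Read a DICOM file and save its pixel data as a 16-bit grayscale PNG."""
       ds = pydicom.dcmread(dicom_path)
       pixels = ds.pixel_array.astype(np.float64)

       # Stretch the stored dynamic range to the full 16-bit range so the
       # browser-side WebGL brightness/contrast controls have room to work.
       lo, hi = pixels.min(), pixels.max()
       if hi > lo:
           pixels = (pixels - lo) / (hi - lo) * np.iinfo(np.uint16).max

       # OpenCV writes a 16-bit PNG when given a uint16 array.
       cv2.imwrite(png_path, pixels.astype(np.uint16))


   if __name__ == "__main__":
       dicom_to_png16("chest_xray.dcm", "chest_xray_16bit.png")
   ```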
2. #### _IDEA:_ Keyboard shortcuts customization
   * ***Description:*** In many cases, good annotation speed requires using the mouse, keyboard, and other input devices effectively. One way is to customize keyboard shortcuts and adapt them to a specific use case. For example, if a task has several labels, it can be important to assign a shortcut to each label and use the shortcuts to switch between labels quickly and annotate faster. Other users want to lock/unlock an object quickly (see the sketch after this idea).
   * ***Expected Outcomes:***
     * It should be possible to configure shortcuts in the settings and save them per user.
     * Add functional tests and documentation.
   * ***Resources:***
   * ***Skills Required:*** TypeScript, React
   * ***Possible Mentors:*** Maria Khrustaleva
   * ***Difficulty:*** Medium
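   One possible shape for the per-user data is sketched below in Python: shortcut overrides validated against a fixed set of actions and stored as JSON. All names (`KNOWN_ACTIONS`, `save_user_shortcuts`, `load_user_shortcuts`) and the file-based storage are hypothetical; a real implementation would use CVAT's own settings storage.

   ```python
   # Hypothetical per-user shortcut overrides persisted as JSON.
   import json
   from pathlib import Path

   # Actions a user could rebind; a real list would come from the UI layer.
   KNOWN_ACTIONS = {"switch_label_1", "switch_label_2", "lock_object", "save_job"}


   def save_user_shortcuts(user_id: int, bindings: dict[str, str], storage_dir: Path) -> None:
       """Validate and persist one user's overrides, e.g. {"lock_object": "l"}."""
       unknown = set(bindings) - KNOWN_ACTIONS
       if unknown:
           raise ValueError(f"Unknown actions: {sorted(unknown)}")
       storage_dir.mkdir(parents=True, exist_ok=True)
       (storage_dir / f"shortcuts_{user_id}.json").write_text(json.dumps(bindings, indent=2))


   def load_user_shortcuts(user_id: int, storage_dir: Path) -> dict[str, str]:
       """Return the user's overrides, or an empty dict if the defaults are kept."""
       path = storage_dir / f"shortcuts_{user_id}.json"
       return json.loads(path.read_text()) if path.exists() else {}
   ```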
3. #### _IDEA:_ Quality control: consensus
   * ***Description:*** If you use a crowd to annotate images, the easiest way to get high-quality annotations for a task is to annotate the same image multiple times and then compare the labels from multiple annotators to produce the final result. Let's say you try to estimate the age of people: the task is very subjective, and an averaged answer from multiple annotators helps you predict a person's age more precisely (see the sketch after this idea).
   * ***Expected Outcomes:***
     * It should be possible to create multiple jobs for the same segment of images (https://github.com/opencv/cvat/issues/125).
     * Support a number of built-in algorithms to merge annotations for a segment: voting, averaging, raw (keep all annotations as is).
     * Update tests and documentation.
   * ***Resources:***
   * ***Skills Required:*** Python, Django
   * ***Possible Mentors:*** Maxim Zhiltsov
   * ***Difficulty:*** Medium
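   The two simplest merge strategies named above (voting and averaging) could look like the Python sketch below; the annotation structure is illustrative and not CVAT's actual data model.

   ```python
   # Hypothetical merge strategies for annotations of the same frame from several annotators.
   from collections import Counter
   from statistics import mean


   def merge_by_voting(labels: list[str]) -> str:
       """Return the label chosen by most annotators (ties resolved arbitrarily)."""
       return Counter(labels).most_common(1)[0][0]


   def merge_by_averaging(values: list[float]) -> float:
       """Return the averaged numeric attribute, e.g. a person's estimated age."""
       return mean(values)


   # Example: three annotators labelled the same person.
   annotations = [
       {"label": "adult", "age": 27},
       {"label": "adult", "age": 31},
       {"label": "teenager", "age": 25},
   ]
   merged = {
       "label": merge_by_voting([a["label"] for a in annotations]),
       "age": merge_by_averaging([a["age"] for a in annotations]),
   }
   print(merged)  # label: 'adult', age ≈ 27.67
   ```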
4. #### _IDEA:_ Quality control: honeypot
   * ***Description:*** TBD
   * ***Expected Outcomes:***
     * TBD
   * ***Resources:*** TBD
   * ***Skills Required:*** TBD
   * ***Possible Mentors:*** TBD
   * ***Difficulty:*** TBD
5. #### _IDEA:_ Embedded specification for a dataset
   * ***Description:*** TBD
   * ***Expected Outcomes:***
     * TBD
   * ***Resources:***
   * ***Skills Required:*** good software development skills, fluent Python and TypeScript
   * ***Possible Mentors:*** Boris Sekachev
   * ***Duration:*** 175 hours
6. #### _IDEA:_ Localization support
   * ***Description:*** TBD
   * ***Expected Outcomes:***
     * TBD
   * ***Resources:*** TBD
   * ***Skills Required:*** TBD
   * ***Possible Mentors:*** TBD
   * ***Difficulty:*** TBD
Template for new project ideas:

1. #### _IDEA:_ <Descriptive Title>
   * ***Description:*** 3-7 sentences describing the task
   * ***Expected Outcomes:***
     * < Short bullet list describing what is to be accomplished >
     * < i.e. create a new module called "bla bla" >
     * < Has method to accomplish X >
     * < ... >
   * ***Resources:***
     * [For example a paper citation](https://arxiv.org/pdf/1802.08091.pdf)
     * [For example an existing feature request](https://github.com/opencv/cvat/pull/5608)
     * [Possibly an existing related module](https://github.com/opencv/cvat/tree/develop/cvat/apps/opencv) that includes the OpenCV JavaScript library
   * ***Skills Required:*** < for example: mastery of and experience coding in Python, college coursework in vision that covers AI topics; best if you have also worked with deep neural networks >
   * ***Possible Mentors:*** < your name goes here >
   * ***Difficulty:*** <Easy, Medium, Hard>
Potential mentors:

- Maxim Zhiltsov
- Boris Sekachev
- Roman Donchenko
- Maria Khrustaleva
- Andrey Zhavoronkov
- Nikita Manovich
- Anna Petrovicheva