WIP
johann-petrak committed Apr 10, 2024
1 parent 7d99097 commit f43cb18
Showing 5 changed files with 151 additions and 56 deletions.
22 changes: 22 additions & 0 deletions site/announcement.md
Original file line number Diff line number Diff line change
@@ -1,2 +1,24 @@
GermEval 2024: GerMS Sexism Detection in German Online News Fora

CALL FOR PARTICIPATION

GermEval 2024: GerMS
(Sexism Detection in German Online News Fora)

9 September 2024 at KONVENS 2024, Vienna, Austria

[https://ofai.github.io/GermEval2024-GerMS/](https://ofai.github.io/GermEval2024-GerMS/)

---- Introduction ----


---- Task description ----


---- Timeline ----


---- Organizers ----



13 changes: 13 additions & 0 deletions site/download.md
@@ -0,0 +1,13 @@
# GermEval2024 GerMS - Download

On this page, the files for training and the files to be labeled can be downloaded
for each phase of the GermEval2024 GerMS competition.


## Trial Phase


## Development Phase


## Competition Phase
4 changes: 2 additions & 2 deletions site/index.md
@@ -54,11 +54,11 @@ are organized into two different tracks:

## Timeline

* **Trial phase**: April 16 - April 30, 2024
* A small labeled dataset for training and a small unlabeled dataset to use for the submission are provided. This phase is for getting to know the
problem, the dataset format, how to submit predictions, how submissions are evaluated, and how the evaluation shows up on the leaderboard.
* **Development phase**: May 1 - June 6, 2024
* During this phase, a labeled training set and an unlabeled test set are made available. The training set will contain the updated labeled versions of the
training and test set of the previous phase plus additional labeled examples. Submissions have to contain the predictions for the unlabeled test set
and the evaluation of the submission will show up on the leaderboard.
* **Competition phase**: June 7 - June 25, 2024
35 changes: 21 additions & 14 deletions site/subtask1.md
@@ -18,7 +18,7 @@
While the annotation guidelines define what kind of sexism/misogyny should get annotated, no attempt has been made to
give rules about how to decide on the strength. For this reason, if an annotator decided that sexism/misogyny is present in a text,
the strength assigned is a matter of personal judgement.

The labels to predict in subtask 1 reflect different strategies for how multiple labels from annotators can be used to derive a final
target label:

* `bin_maj`: predict `1` if a majority of annotators assigned a label other than `0-Kein`, predict `0` if a majority of annotators assigned a label
@@ -31,11 +31,18 @@
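As a sketch, the `bin_maj` rule described above can be expressed in a few lines of Python. The helper name and the example labels are illustrative, not part of the official evaluation code, and how ties are resolved officially is not specified here:

```python
def bin_maj(labels):
    """Derive the bin_maj target from a list of annotator labels.

    Returns 1 if a strict majority assigned a label other than "0-Kein",
    0 if a strict majority assigned "0-Kein", and None on a tie
    (official tie handling is not covered by this sketch).
    """
    sexist = sum(1 for lab in labels if lab != "0-Kein")
    non_sexist = len(labels) - sexist
    if sexist > non_sexist:
        return 1
    if non_sexist > sexist:
        return 0
    return None  # tie

# invented example annotations
print(bin_maj(["0-Kein", "1-Gering", "2-Vorhanden"]))  # → 1
```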

## Data

For the *trial phase* of subtask 1, we provide two small datasets:
* a small labeled dataset containing 'id', 'text', and 'annotations' (annotator ids and the label assigned by them)
* a small unlabeled dataset containing 'id', 'text' and 'annotators' (annotator ids)

For the *development phase* of subtask 1, we provide all participants with the following data:
* the labeled training set containing 'id', 'text', and 'annotations' (annotator ids and the label assigned by them)
* the unlabeled dev set containing 'id', 'text' and 'annotators' (annotator ids)

For the *competition phase* of subtask 1, we provide
* the unlabeled test set containing 'id', 'text' and 'annotators' (annotator ids)

All five files are in JSONL format (one JSON-serialized object per line), where each object is a dictionary with the following
fields:

* `id`: a hash that identifies the example
@@ -46,8 +53,7 @@
* Note that the number of annotations and the specific annotators who assigned labels vary between examples
* `annotators` (only in the unlabeled dataset): an array of annotator ids who labeled the example
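As a sketch, a line of the labeled JSONL data can be parsed with the standard library; the record below is invented for illustration and is not a real dataset entry:

```python
import json

# An invented example line in the labeled-data format described above.
line = ('{"id": "abc123", "text": "example text", '
        '"annotations": [{"user": "A003", "label": "0-Kein"}, '
        '{"user": "A007", "label": "2-Vorhanden"}]}')

record = json.loads(line)
labels = [a["label"] for a in record["annotations"]]
print(record["id"], labels)  # → abc123 ['0-Kein', '2-Vorhanden']
```

A whole file can be read the same way, e.g. `records = [json.loads(l) for l in open("train.jsonl", encoding="utf-8")]` (the file name is a placeholder).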

You can [download](download.md) the data for each phase as soon as the corresponding phase starts.

## Submission

@@ -79,21 +85,22 @@

## Phases

* For the *trial phase*, multiple submissions are allowed for getting to know the problem and the subtask.
* For the *development phase*, multiple submissions are allowed and they serve the purpose of developing and improving the model(s).
* For the *competition phase*, participants may only submit a limited number of times. Please note that only the latest valid submission determines the final task ranking.

## Evaluation

System performance on all five predicted labels (`bin_maj`, `bin_one`, `bin_all`, `multi_maj`, `disagree_bin`) is evaluated using the macro-averaged F1 score
over all classes.

The final `score`, which is used for ranking the submissions, is calculated as the unweighted average over all five scores.
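For orientation, macro F1 is simply the unweighted mean of the per-class F1 scores. A minimal pure-Python sketch follows; the organizers' actual scoring code is not shown on this page and may differ (e.g. it could use scikit-learn's `f1_score` with `average="macro"`):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    classes = sorted(set(y_true))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# invented toy labels: per-class F1 is 0.8 and 0.667
print(round(macro_f1([0, 1, 1, 0], [0, 1, 0, 0]), 4))  # → 0.7333
```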

## Submission errors and warnings

A submission is successful if it has the submission status 'finished'. 'Failed' submissions can be investigated for error sources by clicking the '?' next to 'failed' and looking at LOGS > scoring logs > stderr.


If you experience any issue such as a submission file stuck with a "scoring" status, please cancel the submission and try again. In case the problem persists you can contact us using the Forum.

Following a successful submission, you need to refresh the web page in order to see your score and your result on the leaderboard.
133 changes: 93 additions & 40 deletions site/subtask2.md
@@ -1,56 +1,109 @@
# GermEval2024 GerMS - Subtask 2

IMPORTANT: please note that there is a [closed](closed-track.md) and an [open](open-track.md) track for this subtask!

In subtask 2, the goal is to predict, for each text in a dataset, a distribution derived from the original distribution of labels assigned by several human annotators.

The human annotators assigned the strength of misogyny/sexism present in the given text
(according to the [annotation guidelines](guidelines.md)) via the following labels:

* `0-Kein`: no sexism/misogyny present
* `1-Gering`: mild sexism/misogyny
* `2-Vorhanden`: sexism/misogyny present
* `3-Stark`: strong sexism/misogyny
* `4-Extrem`: extreme sexism/misogyny

While the annotation guidelines define what kind of sexism/misogyny should get annotated, no attempt has been made to give rules about how to decide on the strength. For this reason, if an annotator decided that sexism/misogyny is present in a text, the strength assigned is a matter of personal judgement.

The distributions to predict in subtask 2 are
* the binary distribution ('dist_bin'): two values are predicted, which add up to 1.
* `dist_bin_0`: refers to the portion of annotators labeling the text as 'not-sexist' (`0-Kein`)
* `dist_bin_1`: refers to the portion of annotators labeling the text as 'sexist' (`1-Gering`, `2-Vorhanden`, `3-Stark`, or `4-Extrem`).
* the multi score distribution ('dist_multi'): five values are predicted, which add up to 1.
* `dist_multi_0`: predict the portion of annotators labeling the text as `0-Kein`.
* `dist_multi_1`: predict the portion of annotators labeling the text as `1-Gering`.
* `dist_multi_2`: predict the portion of annotators labeling the text as `2-Vorhanden`.
* `dist_multi_3`: predict the portion of annotators labeling the text as `3-Stark`.
* `dist_multi_4`: predict the portion of annotators labeling the text as `4-Extrem`.
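The two distributions above can be derived from a list of annotator labels as in this sketch (the label strings come from the guideline scale above; the function names are ours, not part of the task definition):

```python
LABELS = ["0-Kein", "1-Gering", "2-Vorhanden", "3-Stark", "4-Extrem"]

def dist_multi(annotations):
    """Fraction of annotators per label, in the order of LABELS."""
    n = len(annotations)
    return [sum(1 for a in annotations if a == lab) / n for lab in LABELS]

def dist_bin(annotations):
    """[dist_bin_0, dist_bin_1]: share of 0-Kein vs. any sexist label."""
    multi = dist_multi(annotations)
    return [multi[0], sum(multi[1:])]

ann = ["0-Kein", "1-Gering", "1-Gering", "3-Stark"]  # invented example
print(dist_bin(ann))    # → [0.25, 0.75]
print(dist_multi(ann))  # → [0.25, 0.5, 0.0, 0.25, 0.0]
```

Both lists sum to 1 by construction, matching the submission constraints.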

## Data

For the *trial phase* of subtask 2, we provide two small datasets:
* a small labeled dataset containing 'id', 'text', and 'annotations' (annotator ids and the label assigned by them)
* a small unlabeled dataset containing 'id', 'text' and 'annotators' (annotator ids)

For the *development phase* of subtask 2, we provide all participants with the following data:
* the labeled training set containing 'id', 'text', and 'annotations' (annotator ids and the label assigned by them)
* the unlabeled dev set containing 'id', 'text' and 'annotators' (annotator ids)

For the *competition phase* of subtask 2, we provide
* the unlabeled test set containing 'id', 'text' and 'annotators' (annotator ids)

All five files are in JSONL format (one JSON-serialized object per line), where each object is a dictionary with the following
fields:

* `id`: a hash that identifies the example
* `text`: the text to classify. The text can contain arbitrary Unicode and new lines
* `annotations` (only in the labeled dataset): an array of dictionaries which contain the following key/value pairs:
* `user`: a string in the form "A003" which is an anonymized id for the annotator who assigned the label
* `label`: the label assigned by the annotator
* Note that the number of annotations and the specific annotators who assigned labels vary between examples
* `annotators` (only in the unlabeled dataset): an array of annotator ids who labeled the example

You can [download](download.md) the data for each phase as soon as the corresponding phase starts.

## Submission

Your submission must be a file in TSV (tab separated values) format which contains the following columns in any order:

* `id`: the id of the example in the unlabeled dataset for which the predictions are submitted
* `dist_bin_0`: prediction of one value between 0 and 1 (all `dist_bin` values need to add up to 1).
* `dist_bin_1`: prediction of one value between 0 and 1 (all `dist_bin` values need to add up to 1).
* `dist_multi_0`: prediction of one value between 0 and 1 (all `dist_multi` values need to add up to 1).
* `dist_multi_1`: prediction of one value between 0 and 1 (all `dist_multi` values need to add up to 1).
* `dist_multi_2`: prediction of one value between 0 and 1 (all `dist_multi` values need to add up to 1).
* `dist_multi_3`: prediction of one value between 0 and 1 (all `dist_multi` values need to add up to 1).
* `dist_multi_4`: prediction of one value between 0 and 1 (all `dist_multi` values need to add up to 1).
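A submission file in this format can be written with the standard `csv` module; the column names are exactly those listed above, while the example id and values are invented:

```python
import csv

# One invented prediction row; dist_bin and dist_multi each sum to 1.
rows = [
    {"id": "abc123", "dist_bin_0": 0.25, "dist_bin_1": 0.75,
     "dist_multi_0": 0.25, "dist_multi_1": 0.5, "dist_multi_2": 0.0,
     "dist_multi_3": 0.25, "dist_multi_4": 0.0},
]
fieldnames = ["id", "dist_bin_0", "dist_bin_1", "dist_multi_0",
              "dist_multi_1", "dist_multi_2", "dist_multi_3", "dist_multi_4"]

with open("submission.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```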

Note that how you derive these values is up to you (as long as the rules for the closed or open track are followed):

* you can train several models or a single model to get the predicted distribution
* you can derive the model-specific training set in any way from the labeled training data
* you can use the information of which annotator assigned the label, or ignore it

To submit your predictions to the competition:

* the file MUST have the file name extension `.tsv`
* the TSV file must get compressed into a ZIP file with extension `.zip`
* the ZIP file should then get uploaded as a submission to the correct competition
* !! Please make sure you submit to the competition that corresponds to the correct subtask (1 or 2) and correct track (Open or Closed)!
* under "My Submissions" make sure to fill out the form and:
* enter the name of your team which has been registered for the competition
* give a name to your method
* confirm that you have checked that you are indeed submitting to the correct competition for the subtask and track desired
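Packaging the `.tsv` into the required `.zip` can be sketched with the standard `zipfile` module; the file names here are placeholders:

```python
import zipfile

# Write a stand-in TSV so the example is self-contained.
with open("submission.tsv", "w", encoding="utf-8") as f:
    f.write("id\tdist_bin_0\tdist_bin_1\n")

# The archive must contain the .tsv file and carry the .zip extension.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.tsv")

print(zipfile.ZipFile("submission.zip").namelist())  # → ['submission.tsv']
```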


**note**: do we provide example submissions?

## Phases

* For the *trial phase*, multiple submissions are allowed for getting to know the problem and the subtask.
* For the *development phase*, multiple submissions are allowed and they serve the purpose of developing and improving the model(s).
* For the *competition phase*, participants may only submit a limited number of times. Please note that only the latest valid submission determines the final task ranking.

## Evaluation

System performance on subtask 2 is evaluated using the Jensen-Shannon distance for both (i) the prediction of the binary distribution and (ii) the prediction of the multi score distribution. We chose the Jensen-Shannon distance because it is a standard method for measuring the similarity between two probability distributions and a proper
distance metric bounded between 0 and 1. It is the square root of the Jensen-Shannon divergence, which is based on the Kullback-Leibler divergence.

The overall score used for ranking the submissions is calculated as the unweighted average of the two JS distances.
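As a sketch, the Jensen-Shannon distance can be computed in pure Python. We use the base-2 logarithm so the value lies in [0, 1]; `scipy.spatial.distance.jensenshannon` with `base=2` gives the same values, but whether the official scoring uses that exact base is an assumption here:

```python
from math import log2, sqrt

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete distributions
    (base-2 log, so the result is in [0, 1])."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):  # Kullback-Leibler divergence D(a || b)
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return sqrt((kl(p, m) + kl(q, m)) / 2)

print(js_distance([0.25, 0.75], [0.25, 0.75]))         # → 0.0 (identical)
print(round(js_distance([1.0, 0.0], [0.0, 1.0]), 4))   # → 1.0 (disjoint)
```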

**note**: for EDOS, they restricted the submission in the test phase to 2. Do we want that as well?

## Submission errors and warnings

A submission is successful if it has the submission status 'finished'. 'Failed' submissions can be investigated for error sources by clicking the '?' next to 'failed' and looking at LOGS > scoring logs > stderr.

If you experience any issue such as a submission file stuck with a "scoring" status, please cancel the submission and try again. In case the problem persists you can contact us using the Forum.

We compute the Jensen-Shannon distance using scipy's spatial distance function. The full evaluation script on CodaBench is available on GitHub [add-link](add-link).

Following a successful submission, you need to refresh the submission page in order to see your score and your result on the leaderboard.

**note**: do we publish the evaluation script when the competition starts or when it has ended?
