Update abstract
feralvam committed Feb 26, 2024
1 parent 1ae8316 commit d493aab
Showing 1 changed file with 1 addition and 1 deletion.
content/event/2024-02-29/index.md (1 addition, 1 deletion)
@@ -12,7 +12,7 @@ location: Abacws
 # postcode:
 # country:
 summary: Talk by [Hosein Mohebbi](https://hmohebbi.github.io/) (Tilburg University, Netherlands)
-abstract: "In both text and speech processing, variants of the Transformer architecture have become ubiquitous. The key advantage of this neural network topology lies in the modeling of pairwise relations between elements of the input (tokens): the representation of a token at a particular Transformer layer is a function of the weighted sum of the transformed representations of all the tokens in the previous layer. This feature of Transformers is known as 'context mixing' and understanding how it functions in specific model layers is crucial for tracing the overall information flow. In this talk, I will first introduce Value Zeroing, a measure of context mixing, and show that the token importance scores obtained through Value Zeroing offer better interpretations compared to previous analysis methods in terms of plausibility, faithfulness, and agreement with probing. Next, by applying Value Zeroing to models of spoken language, we will see how patterns of context mixing can reveal striking differences between the behavior of encoder-only and encoder-decoder speech Transformers."
+abstract: "In both text and speech processing, variants of the Transformer architecture have become ubiquitous. The key advantage of this neural network topology lies in the modeling of pairwise relations between elements of the input (tokens): the representation of a token at a particular Transformer layer is a function of the weighted sum of the transformed representations of all the tokens in the previous layer. This feature of Transformers is known as \'context mixing\' and understanding how it functions in specific model layers is crucial for tracing the overall information flow. In this talk, I will first introduce Value Zeroing, a measure of context mixing, and show that the token importance scores obtained through Value Zeroing offer better interpretations compared to previous analysis methods in terms of plausibility, faithfulness, and agreement with probing. Next, by applying Value Zeroing to models of spoken language, we will see how patterns of context mixing can reveal striking differences between the behavior of encoder-only and encoder-decoder speech Transformers."

 # Talk start and end times.
 # End time can optionally be hidden by prefixing the line with `#`.
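The abstract in the diff above describes context mixing (each token's new representation is a weighted sum of value-transformed tokens) and Value Zeroing, which scores a token's contribution by zeroing its value vector and measuring how much other tokens' outputs change. The sketch below is only an illustration of that idea on a toy single-head self-attention layer, not the talk's implementation; all names here (`d_model`, `attention_output`, the cosine-distance scoring) are assumptions for illustration, and the method as presented would be applied inside full Transformer layers rather than an isolated head.

```python
# Minimal, illustrative sketch of the Value Zeroing idea on a toy
# single-head self-attention layer (assumptions only, not the talk's code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d_model, n_tokens = 16, 5
W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
W_v = torch.randn(d_model, d_model)
x = torch.randn(n_tokens, d_model)  # toy stand-ins for previous-layer token representations


def attention_output(x, zero_value_of=None):
    """Single-head self-attention; optionally zero one token's value vector."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    if zero_value_of is not None:
        v = v.clone()
        v[zero_value_of] = 0.0  # remove token j's contribution to context mixing
    attn = F.softmax(q @ k.T / d_model ** 0.5, dim=-1)
    return attn @ v  # weighted sum of transformed token representations


full = attention_output(x)
scores = torch.zeros(n_tokens, n_tokens)  # scores[i, j]: change in token i when token j's value is zeroed
for j in range(n_tokens):
    ablated = attention_output(x, zero_value_of=j)
    scores[:, j] = 1 - F.cosine_similarity(full, ablated, dim=-1)

print(scores)
```

Reading across row i gives a context-mixing profile for token i: larger entries mark the tokens whose value vectors shaped its representation most in this toy layer.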
