Bertaphore

Overview of projects

Attentions in language
This project aims at merging topological methods with language analysis. The application would be reasoning about language with models like transformers. Making a language model interpretable could also help humans grasp more about language itself. In this project, we study the attention of transformers.
Presentation
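
As a starting point, the attention matrices themselves are easy to extract. Here is a minimal sketch using Hugging Face transformers; the model name and example sentence are assumptions for illustration, not taken from this repo:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; any transformer exposing attentions works.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Language is a labyrinth of paths.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one tensor per layer, shape (batch, heads, seq, seq).
# Each (seq, seq) slice is a row-stochastic matrix, a candidate object
# for topological analysis (e.g. distances between attention distributions).
print(len(out.attentions), out.attentions[0].shape)
```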

Surprise detection
This project is an attempt to detect the parts of a text that are surprising. It is close to anomaly detection. Part of a psychoanalyst's job is to catch "anomalies". Here an anomaly is something that bears a particular interest ("parole pleine et vraie", lit.: "full and true speech").

Metaphor generation

Motivation

Humans teach machines to make sense of language; in turn, machines are interpreted to understand language. We code something to better understand our own code.

Projects and discussion

Surprise detection

See the notebook

Discussion

The current method seems to give good results when the surprise is located in a single word. If other parts of the text relate to the "surprising" vocabulary or idea, the detector does not seem to work. This is because the detection is framed as a Masked Language Modeling task, with only one word masked at a time: for an idea to be truly pivotal, and therefore detectable, it must be condensed into a single word.
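
As an illustration of this framing, here is a minimal sketch of a masked-word rank score, assuming a BERT-style masked LM from Hugging Face transformers; the model name and scoring details are illustrative, not the repo's exact notebook code:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def surprise_ranks(text: str) -> list[tuple[str, int]]:
    """Mask each token in turn and return the rank of the true token in the
    model's prediction; a high rank suggests a surprising word."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    results = []
    for i in range(1, len(input_ids) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        true_id = masked[i].item()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        # rank 0 means the model expected exactly this token
        rank = (logits > logits[true_id]).sum().item()
        results.append((tokenizer.decode([true_id]).strip(), rank))
    return results

for token, rank in surprise_ranks("The cat sat on the piano."):
    print(f"{token:>10}  rank={rank}")
```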

Next steps

  • Single-word surprise detection for simple sentences
  • Test on a few corpora of dialogue
  • Add a logits feature as an alternative to the rank score (see the sketch after this list)
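
For that logits feature, one natural alternative to the token's rank is its negative log-probability under the model. A hedged sketch, reusing the `logits` and `true_id` variables from the previous sketch:

```python
import torch
import torch.nn.functional as F

def logit_surprise(logits: torch.Tensor, true_id: int) -> float:
    """Negative log-probability of the true token at a masked position;
    higher values mean the model found the word more surprising."""
    return -F.log_softmax(logits, dim=-1)[true_id].item()
```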

Attentions in language

Discussion

Next steps
