---
# Documentation: https://wowchemy.com/docs/managing-content/

title: "Seminar: \"Atom-based Approaches to Creating Interpretable NLI Models\""
# event:
# event_url:
location: Abacws
# address:
#   street:
#   city:
#   region:
#   postcode:
#   country:
summary: Talk by [Joe Stacey](https://www.linkedin.com/in/joe-stacey-74572754/) (Apple/Imperial College London)
abstract: "Joe's talk will focus on **atomic inference**, an approach to model interpretability that involves decomposing a task into discrete atoms, before making predictions for each individual atom and combining these atom-level predictions using interpretable rules. Unlike most interpretability methods, which identify influential features without any guarantees of faithfulness, atomic inference identifies exactly which components of an input are responsible for each model decision. While early work applied atomic inference to simple datasets such as SNLI, these methods have now proven to be successful on challenging datasets such as ANLI. Joe will share why he's excited by these methods, and will discuss some open challenges that remain for future work.
Joe will also discuss some of his recent work on model robustness, including his 2024 ACL paper on creating more robust NLI models by generating synthetic data with LLMs. Joe will highlight some of the data quality issues with LLM-generated data, and will discuss ideas for further research in this area."

# Talk start and end times.
# End time can optionally be hidden by prefixing the line with `#`.
date: 2024-10-03T13:00:00Z
date_end: 2024-10-03T14:00:00Z
all_day: false

# Schedule page publish date (NOT event date).
publishDate: 2024-09-27T00:00:00Z

authors: [camachocolladosj]
tags: []

# Is this a featured event? (true/false)
featured: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ""
  focal_point: ""
  preview_only: false

# Custom links (optional).
# Uncomment and edit lines below to show custom links.
# links:
# - name: Follow
#   url: https://twitter.com
#   icon_pack: fab
#   icon: twitter

# Optional filename of your slides within your event's folder or a URL.
url_slides:

url_code:
url_pdf:
url_video:

# Markdown Slides (optional).
# Associate this event with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
slides: ""

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
---

**Invited Speaker:** [Joe Stacey](https://www.linkedin.com/in/joe-stacey-74572754/) (Apple/Imperial College London)

**Bio:**
Joe is an Apple AI/ML scholar in the 4th year of his PhD at Imperial College London, supervised by Marek Rei. Joe's research involves creating more robust and interpretable NLP models, focusing on the task of Natural Language Inference. Prior to his PhD, Joe worked at PwC for 6 years as an analytics consultant, and previously taught maths in a challenging secondary school.
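For readers unfamiliar with the approach described in the abstract, below is a minimal sketch of an atom-level NLI prediction pipeline. The decomposition step and per-atom classifier (`hypothesis_atoms`, `atom_classifier`) are hypothetical stand-ins for whatever models are used in practice, and the combination rule shown (any contradicted atom contradicts the whole hypothesis; entailment requires every atom to be entailed) is one common interpretable aggregation choice for NLI — this is an illustrative sketch, not Joe's implementation.

```python
from typing import Callable, List

def atomic_nli(premise: str,
               hypothesis_atoms: List[str],
               atom_classifier: Callable[[str, str], str]) -> dict:
    """Predict an NLI label by classifying each hypothesis atom separately,
    then combining the atom-level labels with an interpretable rule."""
    atom_labels = {atom: atom_classifier(premise, atom)
                   for atom in hypothesis_atoms}

    # Interpretable combination rule (an assumed, common choice): a single
    # contradicted atom contradicts the whole hypothesis; entailment requires
    # every atom to be entailed; anything else is neutral. The per-atom labels
    # show exactly which atoms are responsible for the final decision.
    if any(label == "contradiction" for label in atom_labels.values()):
        final = "contradiction"
    elif all(label == "entailment" for label in atom_labels.values()):
        final = "entailment"
    else:
        final = "neutral"
    return {"label": final, "atom_labels": atom_labels}

# Toy usage with a stub classifier standing in for a trained NLI model.
if __name__ == "__main__":
    def stub_classifier(premise: str, atom: str) -> str:
        return "entailment" if atom in premise else "neutral"

    print(atomic_nli(
        premise="a man is playing a guitar on stage",
        hypothesis_atoms=["a man is playing a guitar", "a woman is singing"],
        atom_classifier=stub_classifier,
    ))
```

Because the final label follows mechanically from the atom-level labels, the explanation is faithful by construction, which is the property the abstract contrasts with feature-attribution methods.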