This repo contains all materials for the WSDM 2023 tutorial: Proactive Conversational Agents.
- Time: 8:30am-12:00pm, Feb 27th, 2023 (Singapore Time)
- Location: Carlton Hotel Singapore, 76 Bras Basah Road, Singapore
Conversational agents, commonly known as dialogue systems, have gained increasing popularity in recent years. They are widely deployed as personal assistants that interact with users through conversation and help them accomplish various tasks. However, one key weakness of existing conversational agents is that they only learn to passively answer user queries by training on pre-collected and manually labeled data. Such passiveness makes interaction modeling and system building relatively easy, but it largely limits how human-like the agents can be and thus lowers user engagement. In this tutorial, we introduce and discuss methods that equip conversational agents with the ability to interact with end users in a more proactive way. The three-hour tutorial is divided into three parts and includes two interactive exercises. It reviews and presents recent advances on the topic, focusing on automatically expanding the ontology space, actively driving the conversation by asking questions or strategically shifting topics, and retrospectively conducting response quality control.
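To make the first theme concrete, below is a minimal sketch (in Python with scikit-learn) of one simple flavor of ontology expansion: utterances that sit far from every known intent are treated as out-of-scope and clustered into candidate new intents. This is not code from the tutorial or from the papers listed below; the TF-IDF features, the cosine-distance threshold, and the toy utterances are assumptions made purely for illustration.

```python
# Illustrative sketch only: flag utterances that are far from every known
# intent and cluster them into candidate new intents. The tutorial papers
# use learned utterance encoders and constrained/aligned clustering instead;
# TF-IDF features, the 0.7 cosine-distance threshold, and the toy utterances
# below are assumptions made purely for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

known = {  # existing ontology: intent -> example utterances
    "book_flight": ["book a flight to tokyo", "i need a plane ticket"],
    "check_weather": ["what's the weather today", "is it raining in paris"],
}
incoming = [
    "reserve a table for two tonight",   # outside the known ontology
    "cancel my dinner reservation",      # outside the known ontology
    "book a flight to rome",             # covered by book_flight
]

vectorizer = TfidfVectorizer().fit(
    [u for utts in known.values() for u in utts] + incoming
)
# One centroid per known intent in TF-IDF space.
centroids = np.stack([
    vectorizer.transform(utts).toarray().mean(axis=0)
    for utts in known.values()
])

X = vectorizer.transform(incoming).toarray()
# An utterance is "out of scope" if it is far from every known intent.
is_oos = cosine_distances(X, centroids).min(axis=1) > 0.7
oos_utts = [u for u, flag in zip(incoming, is_oos) if flag]

if oos_utts:
    # Group the out-of-scope utterances into candidate new intents.
    k = min(2, len(oos_utts))  # number of candidates is an assumption
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        vectorizer.transform(oos_utts).toarray()
    )
    for c in range(k):
        members = [u for u, l in zip(oos_utts, labels) if l == c]
        print(f"candidate new intent #{c}: {members}")
```

In practice the distance threshold and the number of clusters are estimated from data rather than hard-coded, which is precisely what the constrained and aligned clustering papers in the reading list below address.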
Please click here
TBD
- Discovering New Intents Via Constrained Deep Adaptive Clustering With Cluster Refinement. AAAI 2020. PDF
- Discovering New Intents With Deep Aligned Clustering. AAAI 2021. PDF
- MMConv: An Environment For Multimodal Conversational Search Across Multiple Domains. SIGIR 2021. PDF
- Bridge To Target Domain By Prototypical Contrastive Learning And Label Confusion: Re-Explore Zero-Shot Learning For Slot Filling. EMNLP 2021. PDF
- Semi-Supervised New Slot Discovery With Incremental Clustering. EMNLP Findings 2022. PDF
- Novel Slot Detection: A Benchmark for Discovering Unknown Slot Types In The Task-Oriented Dialogue System. ACL 2021. PDF
- Automatic Intent-Slot Induction For Dialogue Systems. WWW 2021. PDF
- Reflecting On Experiences for Response Generation. MM 2022. PDF
- Generate rather than Retrieve: Large Language Models are Strong Context Generators. ICLR 2023. PDF
- Copy Is All You Need. ICLR 2023. PDF
- Cross Copy Network for Dialogue Generation. EMNLP 2020. PDF
- Multimodal Chain-of-Thought Reasoning in Language Models. arXiv 2023. PDF
- Automatic Chain of Thought Prompting in Large Language Models. arXiv 2023. PDF
- Learning To Ask Critical Questions for Assisting Product Search. SIGIR Workshop eCom 2022. PDF
- Learning To Ask Appropriate Questions In Conversational Recommendation. SIGIR 2021. PDF
- Towards Question-Based Recommender Systems. SIGIR 2020. PDF
- Learning to Ask: Question-Based Sequential Bayesian Product Search. CIKM 2019. PDF
- Topic-Guided Conversational Recommender In Multiple Domain. TKDE 2020. PDF
- Target-Guided Open-Domain Conversation. ACL 2019. PDF
- Conversational Graph Grounded Policy Learning For Open-Domain Conversation Generation. ACL 2020. PDF
- Balancing Reinforcement Learning Training Experiences In Interactive Information Retrieval. SIGIR 2020. PDF
- Topic Shift Detection for Mixed Initiative Response. SIGDIAL 2021. PDF
- High-Quality Dialogue Diversification By Intermittent Short Extension Ensembles. ACL Findings 2021. PDF
- The Information Bottleneck Method. Allerton Conference on Communication, Control, and Computing 1999. PDF
- Agreement-On-The-Line: Predicting The Performance Of Neural Networks Under Distribution Shift. NeurIPS 2022. PDF
- Accuracy On The Line: On The Strong Correlation Between Out-Of-Distribution And In-Distribution Generalization. ICML 2021. PDF
- SafetyKit: First Aid For Measuring Safety In Open-Domain Conversational Systems. ACL 2022. PDF
- Just Say No: Analyzing The Stance Of Neural Dialogue Generation In Offensive Contexts. EMNLP 2021. PDF
- Build It Break It Fix It For Dialogue Safety: Robustness From Adversarial Human Attack. EMNLP 2019. PDF
- Mitigating Harm In Language Models With Conditional-Likelihood Filtration. arXiv 2021. PDF
- Deep Reinforcement Learning In A Handful Of Trials Using Probabilistic Dynamics Models. NeurIPS 2018. PDF
- Conservative Q-Learning For Offline Reinforcement Learning. NeurIPS 2020. PDF
- Proximal Policy Optimization Algorithms. arXiv 2017. PDF
- High-Quality Diversification for Task-Oriented Dialogue Systems. ACL Findings 2021. PDF
- CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning. NAACL 2022. PDF
- Training Language Models to Follow Instructions with Human Feedback. NeurIPS 2022. PDF
- Language Models are Few-Shot Learners. NeurIPS 2020. PDF
- BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. ICML 2022. PDF
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. arXiv 2023. PDF
Lizi Liao is an Assistant Professor at the School of Computing and Information Systems, Singapore Management University, where she leads the CoAgent research group. She received her Ph.D. from the National University of Singapore, advised by Professor Tat-Seng Chua. Dr. Liao has published more than 20 peer-reviewed research papers on topics including, but not limited to, recommender systems, natural language understanding, dialogue modeling, and multimodal learning, with more than 5000 citations.
Dr. Grace Hui Yang is an Associate Professor in the Department of Computer Science at Georgetown University, where she leads the InfoSense (Information Retrieval and Sense-Making) group in Washington, D.C., U.S.A. She obtained her Ph.D. from the Language Technologies Institute at Carnegie Mellon University in 2011. Her current research interests include deep reinforcement learning, conversational artificial intelligence, dynamic information retrieval, search engine evaluation, privacy-preserving information retrieval, and information organization. Prior to this, she conducted research on question answering, ontology construction, near-duplicate detection, multimedia information retrieval, and opinion and sentiment detection.
Chirag Shah is an Associate Professor in the Information School (iSchool) at the University of Washington. He is the Founding Director of the InfoSeeking Lab, which focuses on issues related to information seeking, human-computer interaction (HCI), and social media, supported by grants from the National Science Foundation (NSF), the National Institutes of Health (NIH), the Institute of Museum and Library Services (IMLS), Amazon, Google, and Yahoo. Shah's research involves studies of interactive information retrieval/seeking and recommender systems, addressing various problems through user studies in the lab and the field, and building computational models with machine learning techniques. He has served as a consultant to the United Nations Data Analytics team on various data science projects involving social and political issues, peacekeeping, climate change, and energy. He also holds a position as an Amazon Scholar, working on cutting-edge research problems in personalization and recommendation with Amazon.