This repository collects reviews of various papers I have read.
Due to GitHub's Markdown LaTeX rendering issues, each review is uploaded in two versions: .pdf and .md.
You can also view them at the link below.
Adam: A Method for Stochastic Optimization (implementation only)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Big Bird: Transformers for Longer Sequences
Dense Passage Retrieval for Open-Domain Question Answering
Direct Fact Retrieval from Knowledge Graphs without Entity Linking
Effective Approaches to Attention-based Neural Machine Translation
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
FEVER: a large-scale dataset for Fact Extraction and VERification
Finetuned Language Models Are Zero-Shot Learners
Generation-Augmented Retrieval for Open-Domain Question Answering
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Improving Language Understanding by Generative Pre-Training
Language Models are Unsupervised Multitask Learners
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Multilingual Language Processing From Bytes
Multitask Prompted Training Enables Zero-Shot Task Generalization
Neural Machine Translation by Jointly Learning to Align and Translate
Query Expansion by Prompting Large Language Models
REALM: Retrieval-Augmented Language Model Pre-Training
REPLUG: Retrieval-Augmented Black-Box Language Models
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Self-Attention with Relative Position Representations
The Natural Language Decathlon: Multitask Learning as Question Answering
Training language models to follow instructions with human feedback