[GH Actions] automatic-add-update-publication (#257)

Co-authored-by: arkilpatel <[email protected]>
1 parent 19de5f5 · commit 592f776
Showing 1 changed file with 19 additions and 0 deletions.
@@ -0,0 +1,19 @@
---
title: Evaluating In-Context Learning of Libraries for Code Generation
author: Arkil Patel
names: Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi
venue: Preprint
link: https://arxiv.org/abs/2311.09635
categories: Publications

---

*{{ page.names }}*

**{{ page.venue }}**

{% include display-publication-links.html pub=page %}

## Abstract

Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly promising area is their ability to interpret code modules from unfamiliar libraries for solving user-instructed tasks. Recent work has shown that large proprietary LLMs can learn novel library usage in-context from demonstrations. These results raise several open questions: whether demonstrations of library usage are required, whether smaller (and more open) models also possess such capabilities, etc. In this work, we take a broader approach by systematically evaluating a diverse array of LLMs across three scenarios reflecting varying levels of domain specialization to understand their abilities and limitations in generating code based on libraries defined in-context. Our results show that even smaller open-source LLMs like Llama-2 and StarCoder demonstrate an adept understanding of novel code libraries based on specifications presented in-context. Our findings further reveal that LLMs exhibit a surprisingly high proficiency in learning novel library modules even when provided with just natural language descriptions or raw code implementations of the functions, which are often cheaper to obtain than demonstrations. Overall, our results pave the way for harnessing LLMs in more adaptable and dynamic coding environments.
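
As a rough illustration of the setup described in the abstract (not code from the paper), the sketch below shows how an unfamiliar library might be specified in-context via natural language descriptions before asking a model to generate code that uses it. The library name (`gridworld`), its function descriptions, the task, and the prompt wording are all hypothetical and chosen only for this example.

```python
# Hypothetical sketch: specifying an unfamiliar library in-context via natural
# language descriptions, then prompting an LLM to generate code that uses it.
# The "gridworld" library, its functions, and the prompt format are invented
# for illustration; they are not taken from the paper.

LIBRARY_SPEC = """\
Library: gridworld
- make_grid(width: int, height: int) -> Grid: create an empty grid.
- place(grid: Grid, x: int, y: int, obj: str) -> None: put an object at (x, y).
- shortest_path(grid: Grid, start: tuple, goal: tuple) -> list: return a list
  of (x, y) cells connecting start to goal, avoiding occupied cells.
"""

TASK = (
    "Write a function that builds a 5x5 grid with a wall at (2, 2) and "
    "returns the shortest path from (0, 0) to (4, 4)."
)


def build_prompt(spec: str, task: str) -> str:
    """Assemble an in-context prompt: library specification followed by the task."""
    return (
        "You are given the following library specification:\n\n"
        f"{spec}\n"
        "Using only the functions above, complete the task below.\n\n"
        f"Task: {task}\n\nCode:\n"
    )


if __name__ == "__main__":
    # The resulting string would be sent as a prompt to any code-capable LLM
    # (e.g. Llama-2 or StarCoder, among the models evaluated in the paper).
    print(build_prompt(LIBRARY_SPEC, TASK))
```

The same prompt-assembly step applies when the in-context specification consists of demonstrations or raw code implementations instead of natural language descriptions; only the contents of the specification block change.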