
A Summary of Parameter-Efficient Fine-Tuning Methods

Catalogue

*Prompt Tuning*

(ACL2021_Prefix-Tuning) Prefix-Tuning: Optimizing Continuous Prompts for Generation.
Xiang Lisa Li, Percy Liang.
[paper]

(EMNLP2021_PEPT) The Power of Scale for Parameter-Efficient Prompt Tuning.
Brian Lester, Rami Al-Rfou, Noah Constant.
[paper]
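The soft-prompt idea introduced in the entry above reduces to prepending a small matrix of trainable vectors to the frozen model's input embeddings; the backbone's weights never change, and only the prompt matrix receives gradients. A minimal NumPy sketch with toy dimensions (all names and sizes here are illustrative, not from any specific implementation):

```python
import numpy as np

def prepend_soft_prompt(token_embeddings, prompt):
    """Prepend trainable soft-prompt vectors to a frozen model's input.

    token_embeddings: (seq_len, d) embeddings from the frozen backbone.
    prompt: (num_prompt_tokens, d) trainable parameters (the only thing tuned).
    """
    return np.concatenate([prompt, token_embeddings], axis=0)

rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(5, d))   # frozen input embeddings for one sequence
soft_prompt = np.zeros((3, d))     # prompt parameters, updated by gradient descent
augmented = prepend_soft_prompt(tokens, soft_prompt)
print(augmented.shape)             # (8, 8): 3 prompt tokens + 5 input tokens
```

The parameter count is just `num_prompt_tokens * d`, which is why scaling the backbone (rather than the prompt) is the main lever in the Lester et al. paper.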

(NeurIPS2021_Frozen) Multimodal Few-Shot Learning with Frozen Language Models.
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill.
[paper]

(IJCV2022_CoOp) Learning to Prompt for Vision-Language Models.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu.
[paper] [code]

(ACL2022_PPT) PPT: Pre-trained Prompt Tuning for Few-shot Learning.
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang.
[paper] [code]

(arXiv2021_CPT) CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models.
Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun.
[paper] [code]

(CVPR2022_DenseCLIP) DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting.
Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu.
[paper] [code]

(ECCV2022_Efficient-Prompt) Prompting Visual-Language Models for Efficient Video Understanding.
Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, Weidi Xie.
[paper] [code]

(CVPR2022_L2P) Learning to Prompt for Continual Learning.
Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister.
[paper] [code]

(ICML2022_Language-Planners) Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, Igor Mordatch.
[paper] [code]

(NeurIPS2022_CoT) Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou.
[paper]

(CVPR2022_CoCoOp) Conditional Prompt Learning for Vision-Language Models.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu.
[paper] [code]

(ECCV2022_VPT) Visual Prompt Tuning.
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim.
[paper] [code]

(arXiv2022_Visual-Prompting) Exploring Visual Prompts for Adapting Large-Scale Models.
Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, Phillip Isola.
[paper] [code]

(ECCV2022_DualPrompt) DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning.
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister.
[paper] [code]

(EMNLP2022_ATTEMPT) ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts.
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi.
[paper] [code]

(NeurIPS2022_P2P) P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting.
Ziyi Wang, Xumin Yu, Yongming Rao, Jie Zhou, Jiwen Lu.
[paper] [code]

(NeurIPS2022_PromptGen) Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models.
Chen Henry Wu, Saman Motamed, Shaunak Srivastava, Fernando De la Torre.
[paper] [code]

(NeurIPS2022_ScienceQA) Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering.
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan.
[paper] [code]

(ICML2022_HyperPrompt) HyperPrompt: Prompt-based Task-Conditioning of Transformers.
Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, Zhe Zhao, Yaguang Li, Zhao Chen, Donald Metzler, Heng-Tze Cheng, Ed H. Chi.
[paper]

(ICLR2023_Promptagator) Promptagator: Few-shot Dense Retrieval From 8 Examples.
Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang.
[paper]

(ICLR2023_PromptPG) Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan.
[paper] [code]

(CVPR2023_VPT-GTL) Visual Prompt Tuning for Generative Transfer Learning.
Kihyuk Sohn, Yuan Hao, José Lezama, Luisa Polania, Huiwen Chang, Han Zhang, Irfan Essa, Lu Jiang.
[paper] [code]

(ICLR2023_LPT) LPT: Long-tailed Prompt Tuning for Image Classification.
Bowen Dong, Pan Zhou, Shuicheng Yan, Wangmeng Zuo.
[paper]

(ICLR2023_PLOT) PLOT: Prompt Learning with Optimal Transport for Vision-Language Models.
Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, Kun Zhang.
[paper] [code]

(CVPR2023_MaPLe) MaPLe: Multi-modal Prompt Learning.
Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan.
[paper] [code]

(arXiv2023_DePT) Visual Prompt Tuning for Test-time Domain Adaptation.
Yunhe Gao, Xingjian Shi, Yi Zhu, Hao Wang, Zhiqiang Tang, Xiong Zhou, Mu Li, Dimitris N. Metaxas.
[paper]

(ICLR2023_Description) Visual Classification via Description from Large Language Models.
Sachit Menon, Carl Vondrick.
[paper]

(ICLR2023_reliability) Prompting GPT-3 To Be Reliable.
Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, Lijuan Wang.
[paper] [code]

(arXiv2022_ProSFDA) ProSFDA: Prompt Learning based Source-free Domain Adaptation for Medical Image Segmentation.
Shishuai Hu, Zehui Liao, Yong Xia.
[paper] [code]

(CVPR2023_ILM-VP) Understanding and Improving Visual Prompting: A Label-Mapping Perspective.
Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu.
[paper] [code]

(CVPR2023_TaI-DPT) Texts as Images in Prompt Tuning for Multi-Label Image Recognition.
Zixian Guo, Bowen Dong, Zhilong Ji, Jinfeng Bai, Yiwen Guo, Wangmeng Zuo.
[paper] [code]

(CVPR2023_VoP) VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval.
Siteng Huang, Biao Gong, Yulin Pan, Jianwen Jiang, Yiliang Lv, Yuyuan Li, Donglin Wang.
[paper] [code]

(AAAI2023_CLIP-ReID) CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels.
Siyuan Li, Li Sun, Qingli Li.
[paper] [code]

(CVPR2023_Painter) Images Speak in Images: A Generalist Painter for In-Context Visual Learning.
Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, Tiejun Huang.
[paper] [code]

(AAAI2023_VDP) Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation.
Yulu Gan, Yan Bai, Yihang Lou, Xianzheng Ma, Renrui Zhang, Nian Shi, Lin Luo.
[paper]

(CVPR2023_PIVOT) PIVOT: Prompting for Video Continual Learning.
Andrés Villa, Juan León Alcázar, Motasem Alfarra, Kumail Alhamoud, Julio Hurtado, Fabian Caba Heilbron, Alvaro Soto, Bernard Ghanem.
[paper]

(TMLR2024_EVP) Unleashing the Power of Visual Prompting At the Pixel Level.
Junyang Wu, Xianhang Li, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie.
[paper] [code]

(ACL2023_OFA-PT) Prompt Tuning for Unified Multimodal Pretrained Models.
Hao Yang, Junyang Lin, An Yang, Peng Wang, Chang Zhou.
[paper] [code]

(arXiv2023_MM-CoT) Multimodal Chain-of-Thought Reasoning in Language Models.
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola.
[paper] [code]

(ICCV2023_PTUnifier) Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts.
Zhihong Chen, Shizhe Diao, Benyou Wang, Guanbin Li, Xiang Wan.
[paper] [code]

(CVPR2023_HiPro) Hierarchical Prompt Learning for Multi-Task Learning.
Yajing Liu, Yuning Lu, Hao Liu, Yaozu An, Zhuoran Xu, Zhuokun Yao, Baofeng Zhang, Zhiwei Xiong, Chenguang Gui.
[paper]

(NeurIPS2023_CVP) Convolutional Visual Prompt for Robust Visual Perception.
Yun-Yun Tsai, Chengzhi Mao, Junfeng Yang.
[paper]

(CVPR2023_VE-Prompt) Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving.
Xiwen Liang, Minzhe Niu, Jianhua Han, Hang Xu, Chunjing Xu, Xiaodan Liang.
[paper]

(CVPR2023_CaFo) Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners.
Renrui Zhang, Xiangfei Hu, Bohao Li, Siyuan Huang, Hanqiu Deng, Hongsheng Li, Yu Qiao, Peng Gao.
[paper] [code]

(arXiv2023_ComPro) Learning Combinatorial Prompts for Universal Controllable Image Captioning.
Zhen Wang, Jun Xiao, Yueting Zhuang, Fei Gao, Jian Shao, Long Chen.
[paper]

(CVPR2023_DAM-VP) Diversity-Aware Meta Visual Prompting.
Qidong Huang, Xiaoyi Dong, Dongdong Chen, Weiming Zhang, Feifei Wang, Gang Hua, Nenghai Yu.
[paper] [code]

(AAAI2024_LION) LION: Implicit Vision Prompt Tuning.
Haixin Wang, Jianlong Chang, Xiao Luo, Jinan Sun, Zhouchen Lin, Qi Tian.
[paper]

(CVPR2023_ViPT) Visual Prompt Multi-Modal Tracking.
Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, Huchuan Lu.
[paper] [code]

(arXiv2023_DPL) Decomposed Prototype Learning for Few-Shot Scene Graph Generation.
Xingchen Li, Long Chen, Guikun Chen, Yinfu Feng, Yi Yang, Jun Xiao.
[paper]

(CVPR2023_EVP) Explicit Visual Prompting for Low-Level Structure Segmentations.
Weihuang Liu, Xi Shen, Chi-Man Pun, Xiaodong Cun.
[paper] [code]

(CVPR2023_SP) Semantic Prompt for Few-Shot Image Recognition.
Wentao Chen, Chenyang Si, Zhang Zhang, Liang Wang, Zilei Wang, Tieniu Tan.
[paper]

(ICCV2023_SegGPT) SegGPT: Segmenting Everything In Context.
Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang.
[paper] [code]

(CVPR2023_Vita-CLIP) Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting.
Syed Talal Wasim, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah.
[paper] [code]

(ICCV2023_IDPT) Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models.
Yaohua Zha, Jinpeng Wang, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia.
[paper] [code]

(NeurIPS2023_VPGTrans) VPGTrans: Transfer Visual Prompt Generator across LLMs.
Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, Tat-Seng Chua.
[paper] [code]

(arXiv2023_TreePrompt) TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding.
Chenchi Zhang, Jun Xiao, Lei Chen, Jian Shao, Long Chen.
[paper]

(ACL2023_APT) Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning.
Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang.
[paper]

(arXiv2023_APT) Approximated Prompt Tuning for Vision-Language Pre-trained Models.
Qiong Wu, Shubin Huang, Yiyi Zhou, Pingyang Dai, Annan Shu, Guannan Jiang, Rongrong Ji.
[paper]

(ICLR2024_LRR) Look, Remember and Reason: Visual Reasoning with Grounded Rationales.
Apratim Bhattacharyya, Sunny Panchal, Mingu Lee, Reza Pourreza, Pulkit Madan, Roland Memisevic.
[paper]

(ACMMM2023_Self-PT) Self-PT: Adaptive Self-Prompt Tuning for Low-Resource Visual Question Answering.
Bowen Yuan, Sisi You, Bing-Kun Bao.
[paper] [code]

(ICCV2023_E2VPT) E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning.
Cheng Han, Qifan Wang, Yiming Cui, Zhiwen Cao, Wenguan Wang, Siyuan Qi, Dongfang Liu.
[paper] [code]

(ICCV2023_PromptSwitch) Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval.
Chaorui Deng, Qi Chen, Pengda Qin, Da Chen, Qi Wu.
[paper] [code]

(arXiv2023_DePT) DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning.
Zhengxiang Shi, Aldo Lipani.
[paper] [code]

(arXiv2023_Black-Box) Language Models as Black-Box Optimizers for Vision-Language Models.
Shihong Liu, Zhiqiu Lin, Samuel Yu, Ryan Lee, Tiffany Ling, Deepak Pathak, Deva Ramanan.
[paper] [code]

(arXiv2023_DePT) DePT: Decoupled Prompt Tuning.
Ji Zhang, Shihan Wu, Lianli Gao, Hengtao Shen, Jingkuan Song.
[paper] [code]

(arXiv2023_Point-PEFT) Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models.
Yiwen Tang, Ray Zhang, Zoey Guo, Dong Wang, Zhigang Wang, Bin Zhao, Xuelong Li.
[paper] [code]

(NeurIPS2023_DG-SCT) Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks.
Haoyi Duan, Yan Xia, Mingze Zhou, Li Tang, Jieming Zhu, Zhou Zhao.
[paper] [code]

(AAAI2024_MmAP) MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning.
Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding.
[paper] [code]

*Adapter Tuning*

(ICML2019_Adapter-BERT) Parameter-Efficient Transfer Learning for NLP.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly.
[paper] [code]
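Houlsby-style adapters, as in the entry above, insert a small residual bottleneck module after each frozen sublayer; only the down- and up-projections are trained. A minimal NumPy sketch, with toy dimensions and hypothetical names (the real method also includes biases and layer-norm details omitted here):

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Simplified bottleneck adapter: h + up(relu(down(h))).

    h: (seq_len, d) hidden states from a frozen transformer sublayer.
    W_down: (d, r) projects to a bottleneck of width r << d.
    W_up: (r, d) projects back; the residual keeps the frozen path intact.
    """
    return h + np.maximum(h @ W_down, 0.0) @ W_up  # ReLU + skip connection

rng = np.random.default_rng(0)
d, r = 16, 2                      # bottleneck width r is the efficiency knob
h = rng.normal(size=(5, d))
W_down = rng.normal(scale=0.01, size=(d, r))
W_up = np.zeros((r, d))           # zero init: the adapter starts as the identity
assert np.allclose(adapter(h, W_down, W_up), h)
```

Zero-initializing the up-projection means a freshly inserted adapter leaves the pretrained network's behavior unchanged, so training starts from the base model.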

(EMNLP2019_Adapter-NMT) Simple, Scalable Adaptation for Neural Machine Translation.
Ankur Bapna, Naveen Arivazhagan, Orhan Firat.
[paper]

(NeurIPS2020_TinyTL) TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning.
Han Cai, Chuang Gan, Ligeng Zhu, Song Han.
[paper]

(EMNLP2020_MAD-X) MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder.
[paper] [code]

(EACL2021_AdapterFusion) AdapterFusion: Non-Destructive Task Composition for Transfer Learning.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych.
[paper] [code]

(EMNLP2021_AdapterDrop) AdapterDrop: On the Efficiency of Adapters in Transformers.
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych.
[paper] [code]

(ACL2021_Hyperformer) Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks.
Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson.
[paper] [code]

(NeurIPS2021_Compacter) Compacter: Efficient Low-Rank Hypercomplex Adapter Layers.
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder.
[paper] [code]

(ICLR2022_LoRA) LoRA: Low-Rank Adaptation of Large Language Models.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen.
[paper] [code]
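LoRA, per the entry above, freezes the pretrained weight and learns a low-rank additive update scaled by alpha/r; with the standard zero initialization of one factor, the update starts at zero and training begins exactly at the base model. A minimal NumPy sketch (dimensions and names are illustrative):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass with a LoRA update: effective weight W + (alpha/r) * B @ A.

    W: (d_out, d_in) frozen pretrained weight.
    A: (r, d_in) and B: (d_out, r) are the only trained parameters.
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * B @ A).T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 12, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(scale=0.01, size=(r, d_in))  # small random init
B = np.zeros((d_out, r))                    # zero init: update starts at zero
x = rng.normal(size=(3, d_in))
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)  # B = 0 -> base model
```

Because the update is a plain matrix, it can be merged into W after training, so inference incurs no extra latency relative to the original model.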

(ECCV2022_Tip-Adapter) Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling.
Renrui Zhang, Rongyao Fang, Wei Zhang, Peng Gao, Kunchang Li, Jifeng Dai, Yu Qiao, Hongsheng Li.
[paper] [code]

(CVPR2022_VL-Adapter) VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks.
Yi-Lin Sung, Jaemin Cho, Mohit Bansal.
[paper] [code]

(ICASSP2023_I-Tuning) I-Tuning: Tuning Frozen Language Models with Image for Lightweight Image Captioning.
Ziyang Luo, Zhipeng Hu, Yadong Xi, Rongsheng Zhang, Jing Ma.
[paper]

(AAAI2023_KAdaptation) Parameter-efficient Model Adaptation for Vision Transformers.
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Xin Eric Wang.
[paper] [code]

(NeurIPS2022_IA3) Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning.
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel.
[paper] [code]

(ICLR2023_ViT-Adapter) Vision Transformer Adapter for Dense Predictions.
Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao.
[paper] [code]

(EMNLP2022_AdaMix) AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning.
Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao.
[paper] [code]

(NeurIPS2022_AdaptFormer) AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition.
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo.
[paper] [code]

(NeurIPS2022_ST-Adapter) ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning.
Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li.
[paper] [code]

(arXiv2022_Convpass) Convolutional Bypasses Are Better Vision Transformer Adapters.
Shibo Jie, Zhi-Hong Deng.
[paper] [code]

(NeurIPS2022_Polyhistor) Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks.
Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, Zsolt Kira.
[paper] [code]

(NeurIPS2022_SSF) Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning.
Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang.
[paper] [code]

(AAAI2023_FacT) FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer.
Shibo Jie, Zhi-Hong Deng.
[paper] [code]

(AAAI2023_Mix) Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language.
Yuqi Liu, Luhui Xu, Pengfei Xiong, Qin Jin.
[paper] [code]

(arXiv2022_LAVISH) Vision Transformers are Parameter-Efficient Audio-Visual Learners.
Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius.
[paper] [code]

(arXiv2023_KronA) KronA: Parameter Efficient Tuning with Kronecker Adapter.
Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh.
[paper]

(CVPR2024_MV-Adapter) MV-Adapter: Exploring Parameter Efficient Learning for Video Text Retrieval.
Bowen Zhang, Xiaojie Jin, Weibo Gong, Kai Xu, Xueqing Deng, Peng Wang, Zhao Zhang, Xiaohui Shen, Jiashi Feng.
[paper]

(ICLR2023_AIM) AIM: Adapting Image Models for Efficient Video Action Recognition.
Taojiannan Yang, Yi Zhu, Yusheng Xie, Aston Zhang, Chen Chen, Mu Li.
[paper] [code]

(arXiv2023_OT) Offsite-Tuning: Transfer Learning without Full Model.
Guangxuan Xiao, Ji Lin, Song Han.
[paper] [code]

(arXiv2023_UniAdapter) UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling.
Haoyu Lu, Mingyu Ding, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Masayoshi Tomizuka, Wei Zhan.
[paper] [code]

(arXiv2023_RepAdapter) Towards Efficient Visual Adaption via Structural Re-parameterization.
Gen Luo, Minglang Huang, Yiyi Zhou, Xiaoshuai Sun, Guannan Jiang, Zhiyu Wang, Rongrong Ji.
[paper] [code]

(arXiv2023_T2I-Adapter) T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models.
Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie.
[paper] [code]

(CVPR2023_MixPHM) MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering.
Jingjing Jiang, Nanning Zheng.
[paper] [code]

(arXiv2023_TTC-Tuning) Revisit Parameter-Efficient Transfer Learning: A Two-Stage Paradigm.
Hengyuan Zhao, Hao Luo, Yuyang Zhao, Pichao Wang, Fan Wang, Mike Zheng Shou.
[paper]

(ICCV2023_LAE) A Unified Continual Learning Framework with General Parameter-Efficient Tuning.
Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang.
[paper] [code]

(ICLR2023_AdaLoRA) AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning.
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, Tuo Zhao.
[paper] [code]

(ECIR2023_Adapter-SPLADE) Parameter-Efficient Sparse Retrievers and Rerankers using Adapters.
Vaishali Pal, Carlos Lassance, Hervé Déjean, Stéphane Clinchant.
[paper] [code]

(arXiv2023_Unet-Finetune) A Closer Look at Parameter-Efficient Tuning in Diffusion Models.
Chendong Xiang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu.
[paper] [code]

(EMNLP2023_LLM-Adapters) LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models.
Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee.
[paper] [code]

(NeurIPS2023_CoDA) Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference.
Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Y. Zhao, Yuexin Wu, Bo Li, Yu Zhang, Ming-Wei Chang.
[paper]

(ICLR2023_Robo-Adapter) Lossless Adaptation of Pretrained Vision Models For Robotic Manipulation.
Mohit Sharma, Claudio Fantacci, Yuxiang Zhou, Skanda Koppula, Nicolas Heess, Jon Scholz, Yusuf Aytar.
[paper] [code]

(arXiv2023_PVP) PVP: Pre-trained Visual Parameter-Efficient Tuning.
Zhao Song, Ke Yang, Naiyang Guan, Junjie Zhu, Peng Qiao, Qingyong Hu.
[paper]

(NeurIPS2023_Aurora) Parameter-efficient Tuning of Large-scale Multimodal Foundation Model.
Haixin Wang, Xinlong Yang, Jianlong Chang, Dian Jin, Jinan Sun, Shikun Zhang, Xiao Luo, Qi Tian.
[paper] [code]

(NeurIPS2023_QLoRA) QLoRA: Efficient Finetuning of Quantized LLMs.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer.
[paper] [code]

(CVPR2023_LoRand) 1% VS 100%: Parameter-Efficient Low Rank Adapter for Dense Predictions.
Dongshuo Yin, Yiran Yang, Zhechao Wang, Hongfeng Yu, Kaiwen Wei, Xian Sun.
[paper]

(arXiv2023_LoRAPrune) LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning.
Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, Bohan Zhuang.
[paper]

(arXiv2023_OFT) Controlling Text-to-Image Diffusion by Orthogonal Finetuning.
Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf.
[paper] [code]

(arXiv2023_GLoRA) One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning.
Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, Zhiqiang Shen.
[paper] [code]

(ACMMM2023_VioLET) VioLET: Vision-Language Efficient Tuning with Collaborative Multi-modal Gradients.
Yaoming Wang, Yuchen Liu, Xiaopeng Zhang, Jin Li, Bowen Shi, Chenglin Li, Wenrui Dai, Hongkai Xiong, Qi Tian.
[paper] [code]

(arXiv2023_ReLoRA) ReLoRA: High-Rank Training Through Low-Rank Updates.
Vladislav Lialin, Namrata Shivagunde, Sherin Muckatira, Anna Rumshisky.
[paper] [code]

(ICCV2023_ETRIS) Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation.
Zunnan Xu, Zhihong Chen, Yong Zhang, Yibing Song, Xiang Wan, Guanbin Li.
[paper] [code]

(ICCV2023_BI-LoRA) Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy.
Shibo Jie, Haoqing Wang, Zhi-Hong Deng.
[paper] [code]

(arXiv2023_LoRA-FA) LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning.
Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, Bo Li.
[paper]

(arXiv2023_SLoRA) SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models.
Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, Salman Avestimehr.
[paper]

(ICCV2023_Tem-adapter) Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer.
Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H.S. Torr, Xiao-Ping Zhang, Yansong Tang.
[paper] [code]

(ICCV2023_VL-PET) VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control.
Zi-Yuan Hu, Yanyang Li, Michael R. Lyu, Liwei Wang.
[paper] [code]

(ICCV2023_VLN-PETL) VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation.
Yanyuan Qiao, Zheng Yu, Qi Wu.
[paper] [code]

(arXiv2023_PE-RSITR) Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval.
Yuan Yuan, Yang Zhan, Zhitong Xiong.
[paper] [code]

(AAAI2024_SAM-PARSER) SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction.
Zelin Peng, Zhengqin Xu, Zhilin Zeng, Xiaokang Yang, Wei Shen.
[paper]

(NeurIPS2023_DAS) Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models.
Qiong Wu, Wei Yu, Yiyi Zhou, Shubin Huang, Xiaoshuai Sun, Rongrong Ji.
[paper] [code]

(arXiv2023_Hydra) Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning.
Sanghyeon Kim, Hyunmo Yang, Younghyun Kim, Youngjoon Hong, Eunbyung Park.
[paper] [code]

(IJCV2023_SCT) SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels.
Henry Hengyuan Zhao, Pichao Wang, Yuyang Zhao, Hao Luo, Fan Wang, Mike Zheng Shou.
[paper] [code]

(ICLR2024_LongLoRA) LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models.
Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia.
[paper] [code]

(arXiv2023_QA-LoRA) QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models.
Yuhui Xu, Lingxi Xie, Xiaotao Gu, Xin Chen, Heng Chang, Hengheng Zhang, Zhengsu Chen, Xiaopeng Zhang, Qi Tian.
[paper] [code]

(ICLR2024_LyCORIS) Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation.
Shih-Ying Yeh, Yu-Guan Hsieh, Zhidong Gao, Bernard B W Yang, Giyeong Oh, Yanmin Gong.
[paper] [code]

(ICLR2024_NOLA) NOLA: Networks as Linear Combination of Low Rank Random Basis.
Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash.
[paper] [code]

(ICLR2024_VeRA) VeRA: Vector-based Random Matrix Adaptation.
Dawid J. Kopiczko, Tijmen Blankevoort, Yuki M. Asano.
[paper] [code]

(ICLR2024_LoRA-rank) The Expressive Power of Low-Rank Adaptation.
Yuchen Zeng, Kangwook Lee.
[paper]

(EMNLP2023_Adapters) Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning.
Clifton Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer.
[paper] [code]

(arXiv2023_MultiLoRA) MultiLoRA: Democratizing LoRA for Better Multi-Task Learning.
Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang.
[paper]

(EMNLP2023_SoRA) Sparse Low-rank Adaptation of Pre-trained Language Models.
Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, Maosong Sun.
[paper] [code]

(CVPR2024_SAM-COBOT) Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model.
Zelin Peng, Zhengqin Xu, Zhilin Zeng, Lingxi Xie, Qi Tian, Wei Shen.
[paper]

(NeurIPS2023_CAST) CAST: Cross-Attention in Space and Time for Video Action Recognition.
Dongho Lee, Jongseo Lee, Jinwoo Choi.
[paper] [code]

(AAAI2024_VMT-Adapter) VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding.
Yi Xin, Junlong Du, Qiang Wang, Zhiwen Lin, Ke Yan.
[paper] [code]

(arXiv2023_AdaptIR) AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models.
Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu.
[paper] [code]

(arXiv2023_I2V-Adapter) I2V-Adapter: A General Image-to-Video Adapter for Diffusion Models.
Xun Guo, Mingwu Zheng, Liang Hou, Yuan Gao, Yufan Deng, Pengfei Wan, Di Zhang, Yufan Liu, Weiming Hu, Zhengjun Zha, Haibin Huang, Chongyang Ma.
[paper]

(arXiv2024_RoSA) RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation.
Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević, Dan Alistarh.
[paper] [code]

(CVPR2024_ModaVerse) ModaVerse: Efficiently Transforming Modalities with LLMs.
Xinyu Wang, Bohan Zhuang, Qi Wu.
[paper]

(AAAI2024_DGL) DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval.
Xiangpeng Yang, Linchao Zhu, Xiaohan Wang, Yi Yang.
[paper] [code]

(arXiv2024_LoTR) LoTR: Low Tensor Rank Weight Adaptation.
Daniel Bershatsky, Daria Cherniuk, Talgat Daulbaev, Aleksandr Mikhalev, Ivan Oseledets.
[paper]

(ICML2024_DoRA) DoRA: Weight-Decomposed Low-Rank Adaptation.
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen.
[paper] [code]

(arXiv2024_LoRA+) LoRA+: Efficient Low Rank Adaptation of Large Models.
Soufiane Hayou, Nikhil Ghosh, Bin Yu.
[paper] [code]

(arXiv2024_DiffuseKronA) DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models.
Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.
[paper] [code]

(arXiv2024_Filter-Atoms) Large Convolutional Model Tuning via Filter Subspace.
Wei Chen, Zichen Miao, Qiang Qiu.
[paper]

(CVPR2024_DAPT) Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis.
Xin Zhou, Dingkang Liang, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, Xiang Bai.
[paper] [code]

(arXiv2024_GaLore) GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection.
Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian.
[paper] [code]

(arXiv2024_Routing) Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks.
Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens.
[paper]

(CVPR2024_MoE-Adapters4CL) Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters.
Jiazuo Yu, Yunzhi Zhuge, Lu Zhang, Ping Hu, Dong Wang, Huchuan Lu, You He.
[paper] [code]

(arXiv2024_SuperLoRA) SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules.
Xiangyu Chen, Jing Liu, Ye Wang, Pu Perry Wang, Matthew Brand, Guanghui Wang, Toshiaki Koike-Akino.
[paper]

(arXiv2024_LISA) LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning.
Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang.
[paper] [code]

(arXiv2024_PiSSA) PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models.
Fanxu Meng, Zhaohui Wang, Muhan Zhang.
[paper] [code]

(ICML2024_qGOFT) Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation.
Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao.
[paper]

(arXiv2024_MoMA) MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation.
Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.
[paper] [code]

(ICML2024_DCRIS) Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation.
Zunnan Xu, Jiaqi Huang, Ting Liu, Yong Liu, Haonan Han, Kehong Yuan, Xiu Li.
[paper] [code]

(arXiv2024_Spectral-Adapter) Spectral Adapter: Fine-Tuning in Spectral Space.
Fangzhao Zhang, Mert Pilanci.
[paper]

(ICME2024_DARA) DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding.
Ting Liu, Xuyang Liu, Siteng Huang, Honggang Chen, Quanjun Yin, Long Qin, Donglin Wang, Yue Hu.
[paper] [code]

(arXiv2024_TriLoRA) TriLoRA: Integrating SVD for Advanced Style Personalization in Text-to-Image Generation.
Chengcheng Feng, Mu He, Qiuyu Tian, Haojie Yin, Xiaofang Zhao, Hongwei Tang, Xingqiang Wei.
[paper]

(arXiv2024_Sparse-Tuning) Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference.
Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin.
[paper] [code]

(arXiv2024_FLoRA) FLoRA: Low-Rank Core Space for N-dimension.
Chongjie Si, Xuehui Wang, Xue Yang, Zhengqin Xu, Qingyun Li, Jifeng Dai, Yu Qiao, Xiaokang Yang, Wei Shen.
[paper] [code]

(arXiv2024_MLAE) MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning.
Junjie Wang, Guangjing Yang, Wentao Chen, Huahui Yi, Xiaohu Wu, Qicheng Lao.
[paper] [code]

(arXiv2024_ADAPTER-X) ADAPTER-X: A Novel General Parameter-Efficient Fine-Tuning Framework for Vision.
Minglei Li, Peng Ye, Yongqi Huang, Lin Zhang, Tao Chen, Tong He, Jiayuan Fan, Wanli Ouyang.
[paper]

(arXiv2024_LoRA-Init) The Impact of Initialization on LoRA Finetuning Dynamics.
Soufiane Hayou, Nikhil Ghosh, Bin Yu.
[paper]

(arXiv2024_MiLoRA) MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning.
Hanqing Wang, Zeguan Xiao, Yixia Li, Shuo Wang, Guanhua Chen, Yun Chen.
[paper]

(arXiv2024_LoRA-GA) LoRA-GA: Low-Rank Adaptation with Gradient Approximation.
Shaowen Wang, Linxi Yu, Jian Li.
[paper] [code]

(arXiv2024_Q-GaLore) Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients.
Zhenyu Zhang, Ajay Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang.
[paper] [code]

(arXiv2024_LoRA-Pro) LoRA-Pro: Are Low-Rank Adapters Properly Optimized?
Zhengbo Wang, Jian Liang.
[paper] [code]

(arXiv2024_LoRA-Dash) Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-tuning.
Chongjie Si, Zhiyi Shi, Shifan Zhang, Xiaokang Yang, Hanspeter Pfister, Wei Shen.
[paper] [code]

*Partial Tuning*

(ICML2021_CLIP) Learning Transferable Visual Models From Natural Language Supervision.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
[paper] [code]

(ACL2021_DiffPruning) Parameter-Efficient Transfer Learning with Diff Pruning.
Demi Guo, Alexander M. Rush, Yoon Kim.
[paper] [code]

(ACL2022_BitFit) BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models.
Elad Ben Zaken, Shauli Ravfogel, Yoav Goldberg.
[paper]

(arXiv2022_LayerNorm-tuning) How to Adapt Your Large-Scale Vision-and-Language Model for Downstream Image Classification.
Konwoo Kim, Michael Laskin, Igor Mordatch, Deepak Pathak.
[paper] [code]

(IJCV2024_CLIP-Adapter) CLIP-Adapter: Better Vision-Language Models with Feature Adapters.
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, Yu Qiao.
[paper] [code]

(NeurIPS2021_FISH-Mask) Training Neural Networks with Fixed Sparse Masks.
Yi-Lin Sung, Varun Nair, Colin Raffel.
[paper] [code]

(ICML2022_Head2Toe) Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning.
Utku Evci, Vincent Dumoulin, Hugo Larochelle, Michael C. Mozer.
[paper] [code]

(NAACL2022_AdapterBias) AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks.
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee.
[paper] [code]

(CVPR2023_SoLa) Soft-Landing Strategy for Alleviating the Task Discrepancy Problem in Temporal Action Localization Tasks.
Hyolim Kang, Hanjung Kim, Joungbin An, Minsu Cho, Seon Joo Kim.
[paper]

(NeurIPS2023_InCA) Your representations are in the network: composable and parallel adaptation for large scale models.
Yonatan Dukler, Alessandro Achille, Hao Yang, Varsha Vivek, Luca Zancato, Benjamin Bowman, Avinash Ravichandran, Charless Fowlkes, Ashwin Swaminathan, Stefano Soatto.
[paper]

(ICCV2023_SPT) Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning.
Haoyu He, Jianfei Cai, Jing Zhang, Dacheng Tao, Bohan Zhuang.
[paper] [code]

(arXiv2023_LN-TUNE) Strong Baselines for Parameter Efficient Few-Shot Fine-tuning.
Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi.
[paper]

(ICCV2023_DiffFit) DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning.
Enze Xie, Lewei Yao, Han Shi, Zhili Liu, Daquan Zhou, Zhaoqiang Liu, Jiawei Li, Zhenguo Li.
[paper] [code]

(arXiv2023_ECoFLaP) ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models.
Yi-Lin Sung, Jaehong Yoon, Mohit Bansal.
[paper] [code]

(CVPR2024_PELA) PELA: Learning Parameter-Efficient Models with Low-Rank Approximation.
Yangyang Guo, Guangzhi Wang, Mohan Kankanhalli.
[paper] [code]

(arXiv2023_EFFT) Aggregate, Decompose, and Fine-Tune: A Simple Yet Effective Factor-Tuning Method for Vision Transformer.
Dongping Chen.
[paper] [code]

(CVPR2024_GPS) Gradient-based Parameter Selection for Efficient Fine-Tuning.
Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang.
[paper]

(arXiv2024_ID3) Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models.
Aradhye Agarwal, Suhas K Ramesh, Ayan Sengupta, Tanmoy Chakraborty.
[paper] [code]
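
The common thread in this section is selecting a small subset of existing parameters to train (biases in BitFit, sensitivity- or gradient-ranked weights in SPT and GPS) while freezing the rest. A toy NumPy sketch of the bias-only selection rule, with a made-up parameter dictionary standing in for a real state dict:

```python
import numpy as np

# Toy parameter dictionary standing in for a pretrained model's state dict.
params = {
    "layer1.weight": np.ones((4, 4)),
    "layer1.bias":   np.zeros(4),
    "layer2.weight": np.ones((4, 2)),
    "layer2.bias":   np.zeros(2),
}

# BitFit-style selection: only bias terms are marked trainable.
trainable = {name for name in params if name.endswith(".bias")}

def apply_grads(params, grads, lr=0.1):
    """SGD step that touches only the selected parameters."""
    for name, g in grads.items():
        if name in trainable:
            params[name] = params[name] - lr * g
    return params

grads = {name: np.ones_like(p) for name, p in params.items()}
params = apply_grads(params, grads)
assert np.allclose(params["layer1.weight"], 1.0)  # frozen weight unchanged
assert np.allclose(params["layer1.bias"], -0.1)   # bias updated
```

The other methods above swap the `endswith(".bias")` rule for a learned or gradient-based mask, but keep the same freeze-most, tune-few structure.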

*Side Tuning*

(ECCV2020_Side-Tuning) Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks.
Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, Jitendra Malik.
[paper] [code]

(arXiv2021_BD-ViT) Benchmarking Detection Transfer Learning with Vision Transformers.
Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollar, Kaiming He, Ross Girshick.
[paper] [code]

(FCS2024_Y-Tuning) Y-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning.
Yitao Liu, Chenxin An, Xipeng Qiu.
[paper]

(NeurIPS2022_LST) LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning.
Yi-Lin Sung, Jaemin Cho, Mohit Bansal.
[paper] [code]

(CVPR2023_VQT) Visual Query Tuning: Towards Effective Usage of Intermediate Representations for Parameter and Memory Efficient Transfer Learning.
Cheng-Hao Tu, Zheda Mai, Wei-Lun Chao.
[paper] [code]

(CVPR2023_SAN) Side Adapter Network for Open-Vocabulary Semantic Segmentation.
Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, Xiang Bai.
[paper] [code]

(arXiv2023_E3VA) Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions.
Dongshuo Yin, Xueting Han, Bin Li, Hao Feng, Jing Bai.
[paper]

(arXiv2023_SAM-LST) Ladder Fine-tuning approach for SAM integrating complementary network.
Shurong Chai, Rahul Kumar Jain, Shiyu Teng, Jiaqing Liu, Yinhao Li, Tomoko Tateyama, Yen-wei Chen.
[paper] [code]

(ICCV2023_DiST) Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning.
Zhiwu Qing, Shiwei Zhang, Ziyuan Huang, Yingya Zhang, Changxin Gao, Deli Zhao, Nong Sang.
[paper] [code]

(arXiv2023_HST) Hierarchical Side-Tuning for Vision Transformers.
Weifeng Lin, Ziheng Wu, Jiayu Chen, Wentao Yang, Mingxin Huang, Jun Huang, Lianwen Jin.
[paper] [code]

(NeurIPS2023_Res-Tuning) Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone.
Zeyinzi Jiang, Chaojie Mao, Ziyuan Huang, Ao Ma, Yiliang Lv, Yujun Shen, Deli Zhao, Jingren Zhou.
[paper] [code]

(arXiv2023_Side4Video) Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning.
Huanjin Yao, Wenhao Wu, Zhiheng Li.
[paper] [code]

(CVPR2024_AdaTAD) End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames.
Shuming Liu, Chen-Lin Zhang, Chen Zhao, Bernard Ghanem.
[paper] [code]

(AAAI2024_DTL) DTL: Disentangled Transfer Learning for Visual Recognition.
Minghao Fu, Ke Zhu, Jianxin Wu.
[paper] [code]

(arXiv2024_Proxy-Tuning) Tuning Language Models by Proxy.
Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith.
[paper]

(ICLR2024_BarLeRIa) BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation.
Yaoming Wang, Jin Li, Xiaopeng Zhang, Bowen Shi, Chenglin Li, Wenrui Dai, Hongkai Xiong, Qi Tian.
[paper] [code]

(CVPR2024_LoSA) Time-, Memory- and Parameter-Efficient Visual Adaptation.
Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab.
[paper]

(arXiv2024_LAST) Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning.
Ningyuan Tang, Minghao Fu, Ke Zhu, Jianxin Wu.
[paper]

(arXiv2024_R2-Tuning) R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding.
Ye Liu, Jixuan He, Wanhua Li, Junsik Kim, Donglai Wei, Hanspeter Pfister, Chang Wen Chen.
[paper] [code]

(arXiv2024_LoSA) LoSA: Long-Short-range Adapter for Scaling End-to-End Temporal Action Localization.
Akshita Gupta, Gaurav Mittal, Ahmed Magooda, Ye Yu, Graham W. Taylor, Mei Chen.
[paper]

(CVPR2024_UniPT) UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory.
Haiwen Diao, Bo Wan, Ying Zhang, Xu Jia, Huchuan Lu, Long Chen.
[paper] [code]

(arXiv2024_M2IST) M2IST: Multi-Modal Interactive Side-Tuning for Memory-efficient Referring Expression Comprehension.
Xuyang Liu, Ting Liu, Siteng Huang, Yue Hu, Quanjun Yin, Donglin Wang, Honggang Chen.
[paper]

(ECCV2024_SynQT) Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach.
Taolin Zhang, Jiawang Bai, Zhihe Lu, Dongze Lian, Genping Wang, Xinchao Wang, Shu-Tao Xia.
[paper]

(ECCV2024_SHERL) SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning.
Haiwen Diao, Bo Wan, Xu Jia, Yunzhi Zhuge, Ying Zhang, Huchuan Lu, Long Chen.
[paper] [code]
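
Side-tuning methods keep the backbone frozen and route its intermediate features into a lightweight trainable branch, so backpropagation never traverses the large model. A toy NumPy sketch of the ladder pattern used by LST and its descendants; the layer sizes and the tanh/linear choices are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_side, depth = 16, 4, 3

# Frozen backbone: a stack of fixed linear layers (stand-in for a ViT).
backbone = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(depth)]

# Trainable side network: small projections fed by each backbone layer.
down = [rng.normal(size=(d, d_side)) * 0.01 for _ in range(depth)]
side = [rng.normal(size=(d_side, d_side)) * 0.01 for _ in range(depth)]

def forward(x):
    h = x
    s = np.zeros((x.shape[0], d_side))
    for W, D, S in zip(backbone, down, side):
        h = np.tanh(h @ W)          # frozen path (no gradients needed)
        s = np.tanh(s @ S + h @ D)  # ladder: side state + downsampled feature
    return s                        # only the side branch feeds the task head

out = forward(rng.normal(size=(2, d)))
assert out.shape == (2, d_side)
```

Because only `down`/`side` require gradients, activation memory scales with `d_side` rather than `d`, which is the memory argument these papers make.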

*Unified Tuning*

(ICLR2022_UnifiedPET) Towards a Unified View of Parameter-Efficient Transfer Learning.
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig.
[paper] [code]

(ACL2022_UniPELT) UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning.
Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Wen-tau Yih, Madian Khabsa.
[paper] [code]

(arXiv2022_NOAH) Neural Prompt Search.
Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu.
[paper] [code]

(arXiv2023_V-PETL) Towards a Unified View on Visual Parameter-Efficient Transfer Learning.
Bruce X.B. Yu, Jianlong Chang, Lingbo Liu, Qi Tian, Chang Wen Chen.
[paper] [code]

(arXiv2023_PETL-DS) Parameter-Efficient Fine-Tuning Design Spaces.
Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang.
[paper] [code]

(arXiv2023_AutoPEFT) AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning.
Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen.
[paper] [code]

(arXiv2023_U-Tuning) Rethinking Efficient Tuning Methods from a Unified Perspective.
Zeyinzi Jiang, Chaojie Mao, Ziyuan Huang, Yiliang Lv, Deli Zhao, Jingren Zhou.
[paper]

(arXiv2023_GIST) GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction.
Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Suncheng Xiang, Zefang Yu, Ting Liu, Yuzhuo Fu.
[paper] [code]

(arXiv2024_Subspace-Tuning) See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition.
Chongjie Si, Xiaokang Yang, Wei Shen.
[paper] [code]
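
A recurring observation in these unified views is that adapters, prefixes, and LoRA can all be written as a bottleneck module added to a frozen sublayer, differing mainly in placement and scaling. A hedged NumPy sketch of that design space (the `mode` and `s` knobs are illustrative stand-ins for the dimensions these papers search over):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2

W = rng.normal(size=(d, d)) / np.sqrt(d)  # frozen sublayer weight
W_down = rng.normal(size=(d, r)) * 0.01   # trainable bottleneck
W_up = np.zeros((r, d))                   # zero-init: starts as a no-op

def sublayer(x, mode="parallel", s=1.0):
    if mode == "parallel":
        # Parallel placement (LoRA- / prefix-like): modify from the input.
        delta = np.maximum(x @ W_down, 0) @ W_up
        return x @ W + s * delta
    # Sequential placement (classic adapter): modify the sublayer output.
    h = x @ W
    return h + s * np.maximum(h @ W_down, 0) @ W_up

x = rng.normal(size=(3, d))
assert np.allclose(sublayer(x, "parallel"), x @ W)    # zero-init no-op
assert np.allclose(sublayer(x, "sequential"), x @ W)
```

Search-based methods like NOAH and AutoPEFT can then be read as choosing `mode`, `r`, and `s` per layer instead of fixing them globally.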

*Others*

(IEEE2020_Survey) A Comprehensive Survey on Transfer Learning.
Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, Qing He.
[paper]

(ACL2021_Intrinsic-SAID) Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning.
Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta.
[paper] [code]

(AAAI2023_MobileTL) MobileTL: On-device Transfer Learning with Inverted Residual Blocks.
Hung-Yueh Chiang, Natalia Frumkin, Feng Liang, Diana Marculescu.
[paper]

(arXiv2023_G-BAIR) Gradient-Based Automated Iterative Recovery for Parameter-Efficient Tuning.
Maximilian Mozes, Tolga Bolukbasi, Ann Yuan, Frederick Liu, Nithum Thain, Lucas Dixon.
[paper]

(EMNLP2023_VL-merging) An Empirical Study of Multimodal Model Merging.
Yi-Lin Sung, Linjie Li, Kevin Lin, Zhe Gan, Mohit Bansal, Lijuan Wang.
[paper] [code]

(NeurIPS2023_MEFT) Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning.
Baohao Liao, Shaomu Tan, Christof Monz.
[paper] [code]

(arXiv2023_LOMO) Full Parameter Fine-tuning for Large Language Models with Limited Resources.
Kai Lv, Yuqing Yang, Tengxiao Liu, Qinghui Gao, Qipeng Guo, Xipeng Qiu.
[paper] [code]

(CVPR2024_Dr2Net) Dr2Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning.
Chen Zhao, Shuming Liu, Karttikeya Mangalam, Guocheng Qian, Fatimah Zohra, Abdulmohsen Alghannam, Jitendra Malik, Bernard Ghanem.
[paper]

(arXiv2024_Survey) Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey.
Yi Xin, Siqi Luo, Haodi Zhou, Junlong Du, Xiaohong Liu, Yue Fan, Qing Li, Yuntao Du.
[paper] [code]

(arXiv2024_OSD) Memory-Efficient LLM Training with Online Subspace Descent.
Kaizhao Liang, Bo Liu, Lizhang Chen, Qiang Liu.
[paper] [code]
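
Several entries in this section (LOMO, MEFT, Dr2Net, OSD) target memory rather than parameter efficiency, e.g. by fusing the backward pass with the optimizer step so full gradient buffers are never held at once. A toy NumPy sketch of that fuse-and-free pattern on a two-layer network; all sizes, the learning rate, and the loss are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1
lr = 0.1

def train_step(x, y):
    """Apply each layer's gradient as soon as it is produced, then discard it,
    instead of materializing all gradients before the optimizer step."""
    global W1, W2
    n = len(x)
    h = np.tanh(x @ W1)
    err = (h @ W2) - y                   # d(MSE)/d(pred), up to a constant
    gW2 = h.T @ err / n
    dh = err @ W2.T * (1 - h ** 2)       # uses pre-update W2, as in backprop
    W2 -= lr * gW2                       # gW2 can be freed right here
    W1 -= lr * (x.T @ dh) / n            # same fuse-and-free for layer 1
    return float((err ** 2).mean())

x, y = rng.normal(size=(8, 4)), rng.normal(size=(8, 1))
losses = [train_step(x, y) for _ in range(20)]
assert losses[-1] < losses[0]            # loss decreases on the toy problem
```

In a real framework this corresponds to per-parameter gradient hooks; the reversible-network entries (MEFT, Dr2Net) instead trade recomputation for activation memory, which this sketch does not cover.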