mohammad-gh009/Large-Language-Models-vs-Classical-Machine-learning

Large Language Models vs. Classical Machine Learning

Project Aim

To compare the performance of Large Language Models (LLMs) and Classical Machine Learning (CML) approaches in predicting COVID-19 mortality using a high-dimensional structured dataset.

Project Team

Mohammadreza Ghaffarzadeh-Esfahani, Mahdi Ghaffarzadeh-Esfahani, Arian Salahi-Niri, Hossein Toreyhi, Zahra Atf, Amirali Mohsenzadeh-Kermani, Mahshad Sarikhani, Zohreh Tajabadi, Fatemeh Shojaeian, Mohammad Hassan Bagheri, Aydin Feyzi, Mohammadamin Tarighatpayma, Narges Gazmeh, Fateme Heydari, Hossein Afshar, Amirreza Allahgholipour, Farid Alimardani, Ameneh Salehi, Naghmeh Asadimanesh, Mohammad Amin Khalafi, Hadis Shabanipour, Ali Moradi, Sajjad Hossein Zadeh, Omid Yazdani, Romina Esbati, Moozhan Maleki, Danial Samiei Nasr, Amirali Soheili, Hossein Majlesi, Saba Shahsavan, Alireza Soheilipour, Nooshin Goudarzi, Erfan Taherifard, Hamidreza Hatamabadi, Jamil S Samaan, Thomas Savage, Ankit Sakhuja, Ali Soroush, Girish Nadkarni, Ilad Alavi Darazam, Mohamad Amin Pourhoseingholi, Seyed Amir Ahmad Safavi-Naini

Code

This repository contains code for:

  1. Training Classical Machine Learning models
  2. Generating LLM answers for zero-shot prediction
  3. Fine-tuning Mistral-7B
  4. Generating pre-trained language model answers for zero-shot prediction
  5. Analyzing performance
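As a minimal sketch of step 1, a classical model can be trained on a tabular dataset with a binary mortality label and evaluated by AUROC. The synthetic features and the choice of a random forest here are illustrative assumptions, not the cohort data or the exact models used in this repository.

```python
# Sketch of a classical ML baseline for binary mortality prediction.
# The data below is synthetic and the feature count is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # 20 hypothetical lab/vital features
# Synthetic binary outcome driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUROC: {auc:.3f}")
```

The same train/test split and AUROC metric can be reused across models so that CML and LLM predictions are scored on identical held-out patients.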

For data curation, cleaning, and laboratory extraction, as well as additional project information, please refer to the Tehran-COVID-Cohort repository.
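For step 2, zero-shot prediction requires serializing each structured record into natural-language text for the LLM. The record fields, wording, and yes/no framing below are hypothetical illustrations of that serialization step, not the project's actual prompt template.

```python
# Hypothetical sketch: turning one structured patient record into a
# zero-shot classification prompt. Field names are illustrative only.
record = {"age": 67, "sex": "male", "oxygen_saturation": 88, "creatinine": 1.4}

features = ", ".join(f"{k.replace('_', ' ')}: {v}" for k, v in record.items())
prompt = (
    "A COVID-19 patient presents with the following findings: "
    f"{features}. "
    "Will this patient die during hospitalization? Answer only 'yes' or 'no'."
)
print(prompt)
```

Constraining the answer to a fixed vocabulary makes the free-text LLM output easy to map back onto the binary label used by the classical models.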

Contact Us

To request access to the data, please email SAA Safavi-Naini ([email protected]) using your institutional email address.
