This project aims to predict customer churn using an Artificial Neural Network (ANN) model. The model is built using TensorFlow and Keras, and the application is deployed using Streamlit for an interactive user interface.
- Project Overview
- Installation
- Usage
- Model Training
- Prediction
- File Descriptions
- Dependencies
- Acknowledgements
Predicting customer churn is crucial for businesses that want to retain their customers. This project uses an ANN model to predict whether a customer is likely to churn based on features such as credit score, geography, gender, age, balance, and more.
To run this project, you need Python installed on your machine. Follow the steps below to set up the project:

- Clone the repository:

  ```bash
  git clone https://github.com/sanskaryo/Churn-Prediction-Using_ANN.git
  cd Churn-Prediction-Using_ANN
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the Streamlit application:

  ```bash
  streamlit run app.py
  ```
This will start a local Streamlit server where you can interact with the application through a web interface.
After starting the Streamlit app, enter customer details (such as credit score, age, and geography) in the input fields to predict whether the customer is likely to churn. The model processes the input and provides a real-time prediction of churn likelihood.
The model was trained and evaluated in `experiments.ipynb`, a Jupyter notebook that covers the following steps (a condensed sketch follows the list):
- Data Preprocessing: Scaling numeric features, encoding categorical data (gender and geography), and preparing the dataset for ANN input.
- Model Building: Defining the ANN structure with input, hidden, and output layers tailored for binary classification.
- Model Training: Training the model on historical customer data, allowing it to learn patterns associated with churn.
- Evaluation: Assessing model performance using metrics like accuracy, precision, recall, and F1-score.
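The sketch below condenses these steps into runnable form. It is an illustrative approximation, not the notebook's actual code: the dataset file name (`Churn_Modelling.csv`), column names, layer sizes, and hyperparameters are assumptions and may differ from what `experiments.ipynb` uses.

```python
# Illustrative approximation of the experiments.ipynb workflow.
# Dataset file name, column names, layer sizes, and hyperparameters are
# assumptions; the actual notebook may differ.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler

# --- Data preprocessing ---
df = pd.read_csv("Churn_Modelling.csv")  # assumed dataset file
df = df.drop(columns=["RowNumber", "CustomerId", "Surname"], errors="ignore")
X = df.drop(columns=["Exited"])          # assumed target column
y = df["Exited"]

# Label-encode gender, one-hot encode geography
gender_encoder = LabelEncoder()
X["Gender"] = gender_encoder.fit_transform(X["Gender"])

geo_encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")  # scikit-learn >= 1.2
geo = pd.DataFrame(
    geo_encoder.fit_transform(X[["Geography"]]),
    columns=geo_encoder.get_feature_names_out(["Geography"]),
    index=X.index,
)
X = pd.concat([X.drop(columns=["Geography"]), geo], axis=1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale numeric features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# --- Model building: feed-forward ANN for binary classification ---
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # churn probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# --- Training and evaluation ---
model.fit(X_train, y_train, validation_split=0.2, epochs=20, batch_size=32)
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test accuracy: {accuracy:.3f}")
```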
The trained model is saved as `model.keras`, which is then loaded by `app.py` for predictions.
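Saving and reloading follow the standard Keras and pickle APIs. Continuing the sketch above (the file names match those listed under File Descriptions, but the exact save/load code in the repository may differ):

```python
import pickle
import tensorflow as tf

# In experiments.ipynb: persist the trained model and preprocessing objects
model.save("model.keras")
with open("scaler.pkl", "wb") as f:
    pickle.dump(scaler, f)
with open("label_encoder_gender.pkl", "wb") as f:
    pickle.dump(gender_encoder, f)
with open("onehot_encoder_geo.pkl", "wb") as f:
    pickle.dump(geo_encoder, f)

# In app.py: load them back for inference
model = tf.keras.models.load_model("model.keras")
with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)
```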
The `app.py` file serves as the main interface for making predictions. It performs the following tasks (sketched below):
- Load the Model and Encoders: The model, label encoders, and scaler are loaded.
- Process Input Data: User inputs are scaled and encoded as needed.
- Generate Prediction: The processed input data is fed into the model to obtain a prediction.
- Display Result: The result ("Churn" or "No Churn") is shown on the app interface.
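Here is a minimal sketch of how `app.py` might implement these tasks. The file names match those listed under File Descriptions, but the widget labels, feature set, feature order, and 0.5 decision threshold are illustrative assumptions rather than the repository's exact code.

```python
# Illustrative sketch of a Streamlit churn-prediction app; widget labels,
# feature set, and threshold are assumptions, not the repository's exact code.
import pickle
import numpy as np
import streamlit as st
import tensorflow as tf

# Load the trained model, encoders, and scaler (file names from the repo)
model = tf.keras.models.load_model("model.keras")
with open("label_encoder_gender.pkl", "rb") as f:
    label_encoder_gender = pickle.load(f)
with open("onehot_encoder_geo.pkl", "rb") as f:
    onehot_encoder_geo = pickle.load(f)
with open("scaler.pkl", "rb") as f:
    scaler = pickle.load(f)

st.title("Customer Churn Prediction")

# Collect user input
geography = st.selectbox("Geography", onehot_encoder_geo.categories_[0])
gender = st.selectbox("Gender", label_encoder_gender.classes_)
age = st.slider("Age", 18, 92)
credit_score = st.number_input("Credit Score", value=650)
balance = st.number_input("Balance", value=0.0)

# Encode and scale the input the same way as during training.
# Assumes the one-hot encoder returns a dense array (sparse_output=False).
# NOTE: in the real app the feature vector must contain every column used
# during training, in the same order; this subset is illustrative only.
geo_encoded = onehot_encoder_geo.transform([[geography]])
features = np.concatenate([
    [credit_score, label_encoder_gender.transform([gender])[0], age, balance],
    geo_encoded.ravel(),
]).reshape(1, -1)
features = scaler.transform(features)

# Generate the prediction and display the result
churn_probability = float(model.predict(features)[0][0])
st.write(f"Churn probability: {churn_probability:.2f}")
st.write("Churn" if churn_probability > 0.5 else "No Churn")
```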
- `app.py`: The main Streamlit application for user interaction and predictions.
- `experiments.ipynb`: Jupyter notebook containing data exploration, preprocessing, model building, training, and evaluation code.
- `model.keras`: The trained ANN model saved in Keras format, used for deployment.
- `label_encoder_gender.pkl`: Pickle file containing the encoder for the gender feature.
- `onehot_encoder_geo.pkl`: Pickle file containing the encoder for the geography feature.
- `scaler.pkl`: Pickle file containing the scaler for numerical features.
- `requirements.txt`: File listing all dependencies required to run the project.
This project requires the following Python libraries:

- `tensorflow`
- `pandas`
- `numpy`
- `scikit-learn`
- `tensorboard`
- `matplotlib`
- `streamlit`
- `scikeras`
Install these dependencies by running:

```bash
pip install -r requirements.txt
```
This project is based on tutorials from Krish Naik's Udemy course. Special thanks to Krish Naik for providing comprehensive guidance on building churn prediction models with machine learning.