
sending input to model #40

Open
wants to merge 2 commits into main

Conversation

@PranaviKolouju PranaviKolouju commented Nov 11, 2024

Fixes #37

What was changed?

I modified the home screen so that users can interactively communicate with the AI agent. Users can now send messages to the agent and maintain a continuous stream of communication. The user's messages are colored blue, whereas the AI responses are light grey. The user can also use the scroll component to view past message history with the AI agent. While the user cannot yet get a real response from the agent, a successfully sent message is acknowledged with the response "Successful input". An additional server was created to host the model and allow user communication with it.

Why was it changed?

Previously, the user was unable to communicate with the model; their messages were not being sent anywhere. Now, the user can send messages to the model and know whether their message was sent successfully (the model's response will be surfaced in later stages). The chat-like appearance of the modified home screen UI lets the user visualize their conversation with the AI agent, and the scroll feature lets them review their chat history.

How was it changed?

I added a Python file called model.py in the root directory of the project. This file hosts our model on port 5001 using the Flask Python library. I imported the Salesforce codet5-base dummy model that we are using and created a "/predict" API endpoint that lets us send user input to the model. The associated code is depicted below:

image
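The server described above can be sketched roughly as follows. This is a minimal reconstruction from the description, not the actual model.py: the Salesforce codet5-base pipeline is replaced by an echo stub (`run_model`) so the sketch runs without downloading model weights, and the JSON response shape is an assumption.

```python
# Minimal sketch of model.py; the CodeT5 pipeline is replaced by an echo stub.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_model(text):
    # Stand-in for the real CodeT5 inference call (assumption).
    return f"echo: {text}"

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json(silent=True) or {}
    input_text = data.get('input', '')
    # Defensive server-side check; the frontend also rejects empty strings
    # before sending, as discussed in the review comments.
    if not isinstance(input_text, str) or not input_text.strip():
        return jsonify({'error': 'input must be a non-empty string'}), 400
    return jsonify({'output': run_model(input_text)})

if __name__ == '__main__':
    app.run(port=5001)
```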

In addition, I made a few UI modifications to mimic the conversation between a human and an AI agent. In the HomeScreen.js file, I added three functions:

  1. handleKeyDown(): Sets up an event listener; when the Enter key is pressed, it triggers the handleSend function.
  2. handleSend(): Validates that the input is not empty. If the message is valid, it triggers the sendMessage function. On success it displays "Successful input" (this will change once we display the model's actual output); otherwise it displays an error message in the UI.
  3. sendMessage(): Sends a POST request to the API endpoint where the model is served, and reports whether the message was sent successfully or an error occurred.

image
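The three handlers above can be sketched as below. The port and the "Successful input" string come from the PR description; the separate `validateInput` helper, the injectable `fetchFn` parameter, and the exact error string are illustrative additions, not part of the actual HomeScreen.js.

```javascript
// Sketch of the HomeScreen.js message flow (not the actual file).
const API_URL = 'http://localhost:5001/predict'; // port taken from the PR description

// Illustrative helper: return the trimmed message, or null if it is empty.
function validateInput(raw) {
  const trimmed = raw.trim();
  return trimmed.length > 0 ? trimmed : null;
}

// POST the message to the model server; resolve to a status string for the UI.
// fetchFn is injectable for testing and defaults to the global fetch.
async function sendMessage(message, fetchFn = fetch) {
  try {
    const res = await fetchFn(API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ input: message }),
    });
    return res.ok ? 'Successful input' : 'Error: message could not be sent';
  } catch (err) {
    return 'Error: message could not be sent';
  }
}

// Send on Enter, mirroring handleKeyDown in the PR.
function handleKeyDown(event, onSend) {
  if (event.key === 'Enter') onSend();
}
```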

I also updated the HomeScreen.css file to display user input and model responses as a chat, with the two sides of the conversation shown in different colors. I also added "overflow-y: auto" to the ai-output class so that the message area scrolls when the chat history overflows.
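The relevant CSS could look roughly like the sketch below. Only `overflow-y: auto` on ai-output is confirmed by the description; the other selectors, color values, and properties are assumptions for illustration.

```css
/* Sketch of the chat styling in HomeScreen.css; selectors are assumed. */
.ai-output {
  overflow-y: auto;           /* confirmed in the PR: makes the chat area scrollable */
  max-height: 60vh;           /* assumed cap so older messages scroll out of view */
}

.user-message {
  background-color: #2196f3;  /* user messages shown in blue */
  color: white;
  align-self: flex-end;
}

.ai-message {
  background-color: #eeeeee;  /* AI responses shown in light grey */
  align-self: flex-start;
}
```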

Screenshots that show the changes (if applicable):

image

  return () => {
    window.removeEventListener('popstate', handlePopState);
  };
}, []);

useEffect(() => {
  messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
Collaborator:

Is there a maximum limit on the number of messages that can be viewed?

Collaborator Author:

I do not believe so.

};

const handleKeyDown = (event) => {
  if (event.key === 'Enter') {
Collaborator:

The messages are saving properly in the chat which is a good sign.

};

const handleCancelLogout = () => {
  setOpenDialog(false);
  window.history.pushState(null, document.title, window.location.href);
};

const handleSend = async () => {
  const trimmedMessage = inputMessage.trim();
  if (!trimmedMessage) return;
Collaborator:

Would this be better handled in the backend when sending to the model?

Collaborator Author:

I don't think so, only because we don't want to be having the model work with empty text. So, we check in the front end to make sure we aren't sending empty strings to the model. This is just part of that logic.

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    input_text = data['input']
Collaborator:

You could validate the input to ensure it's a non-empty string before sending it to the model.

Collaborator Author:

I have already made this check in HomeScreen.js. I don't handle it here because we want to ensure the string isn't empty before we make a request to the model.

@AjithAkuthota23 (Collaborator) left a comment:

  1. Since we need to run python model.py before running npm start, I recommend adding this command to the concurrently script in package.json.

  2. Can you add an option at the end of the text box to expand the input area if required?


Successfully merging this pull request may close these issues.

Frontend - Capture Input and Display Model Response in Chat-like UI
3 participants