sending input to model #40
base: main
Conversation
```js
    return () => {
      window.removeEventListener('popstate', handlePopState);
    };
  }, []);

  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
```
Is there a maximum limit on the number of messages that can be viewed?
I do not believe so.
```js
  };

  const handleKeyDown = (event) => {
    if (event.key === 'Enter') {
```
The messages are saving properly in the chat, which is a good sign.
```js
  };

  const handleCancelLogout = () => {
    setOpenDialog(false);
    window.history.pushState(null, document.title, window.location.href);
  };

  const handleSend = async () => {
    const trimmedMessage = inputMessage.trim();
    if (!trimmedMessage) return;
```
Would this be better handled in the backend when sending to the model?
I don't think so, because we don't want the model working with empty text. We check in the front end to make sure we aren't sending empty strings to the model; this is just part of that logic.
```python
@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    input_text = data['input']
```
You could validate the input to ensure it's a non-empty string before sending it to the model.
I have already made this check in HomeScreen.js. I don't handle it here because we want to ensure the string isn't empty before we make a request to the model.
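For reference, a minimal sketch of the server-side check the reviewer suggested; this is not code from the PR, and the error message, `get_json` usage, and acknowledgement string are assumptions:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    # Parse the body defensively; silent=True returns None on bad JSON.
    data = request.get_json(silent=True) or {}
    input_text = data.get('input')
    # Reject missing, non-string, or whitespace-only input as a backstop
    # behind the front-end trim check in HomeScreen.js.
    if not isinstance(input_text, str) or not input_text.strip():
        return jsonify({'error': 'input must be a non-empty string'}), 400
    return jsonify({'response': 'Successful input'})
```

Even with the front-end check, a backstop like this protects the endpoint from clients that bypass the UI (e.g., a raw curl request).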
- Since we need to run python model.py before running npm start, I recommend adding that command to the concurrently script in package.json.
- Can you add an option at the end of the text box to expand the input box if required?
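A hedged sketch of how the concurrently suggestion above could look in package.json; the script names and the react-scripts start command are assumptions, not the project's actual configuration:

```json
{
  "scripts": {
    "start:model": "python model.py",
    "start:react": "react-scripts start",
    "start": "concurrently \"npm run start:model\" \"npm run start:react\""
  }
}
```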
Fixes #37
What was changed?
I modified the home screen to allow users to interactively communicate with the AI agent. Users are now able to send messages to the agent and maintain a continuous stream of communication. The user's messages are colored blue, whereas the AI responses are colored light grey. The user can also use the scroll component to view past message history with the AI agent. While the user cannot get a response from the agent at this time, if their message was successfully sent, they are shown a response of "Successful input". An additional server was created to host the model and allow user communication with it.
Why was it changed?
Previously, the user was unable to communicate with the model; their messages were not being sent anywhere. Now, the user can send messages to the model and know whether their message was successfully sent (the response from the model will be communicated in later stages). The user can visualize their communication with the AI agent through the chat-like appearance of the modified home screen UI. In addition, the user can monitor their chat history with the AI agent through the scroll feature.
How was it changed?
I added a Python file called model.py in the root directory of the project. This file hosts our model on port 5001 using the Flask Python library. I imported the Salesforce codet5-base dummy model that we are using and created a "/predict" API endpoint that allows us to send user input to the model. The associated code is depicted below:
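A minimal sketch of the /predict endpoint along the lines described above, assuming Flask's default JSON parsing; the commented-out model-loading lines and the exact acknowledgement string are assumptions, since generation is not wired up at this stage:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# The Salesforce codet5-base checkpoint would be loaded here in model.py;
# generation is not wired up yet, so /predict only acknowledges the input.
# from transformers import RobertaTokenizer, T5ForConditionalGeneration
# tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base')
# model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base')

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    input_text = data['input']
    # At this stage the server only confirms receipt of the message.
    return jsonify({'response': 'Successful input'})

# In model.py the server is started on port 5001 with:
# app.run(port=5001)
```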
In addition, I made a few UI modifications to mimic the conversation between a human and an AI agent. In the HomeScreen.js file, I added three functions:
I also updated the HomeScreen.css file to support displaying user input and model responses as a chat, where the two streams of communication are shown in different colors. I also added the CSS rule "overflow-y: auto" to the ai-output class to allow for autoscrolling on the document.
Screenshots that show the changes (if applicable):