Fix ml inference ingest processor always return list using JsonPath #2985
Conversation
Signed-off-by: Mingshi Liu <[email protected]>
```diff
@@ -320,24 +312,29 @@ private void getMappedModelInputFromDocuments(
Object documentFieldValue = ingestDocument.getFieldValue(originalFieldPath, Object.class);
String documentFieldValueAsString = toString(documentFieldValue);
updateModelParameters(modelInputFieldName, documentFieldValueAsString, modelParameters);
return;
```
Seems like a bug; should we also backport this to older versions?
Of course, backporting all the way back to 2.14.
The backport to `2.14` could not be completed automatically. To backport manually, run these commands in your terminal:

```shell
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.14 2.14
# Navigate to the new working tree
cd .worktrees/backport-2.14
# Create a new branch
git switch --create backport/backport-2985-to-2.14
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 f4b472480390805e0a24de46d9781a665cfb6184
# Push it to GitHub
git push --set-upstream origin backport/backport-2985-to-2.14
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.14
```

Then, create a pull request where the base branch is `2.14` and the compare branch is `backport/backport-2985-to-2.14`.
The backport to `2.15` could not be completed automatically. To backport manually, run these commands in your terminal:

```shell
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.15 2.15
# Navigate to the new working tree
cd .worktrees/backport-2.15
# Create a new branch
git switch --create backport/backport-2985-to-2.15
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 f4b472480390805e0a24de46d9781a665cfb6184
# Push it to GitHub
git push --set-upstream origin backport/backport-2985-to-2.15
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.15
```

Then, create a pull request where the base branch is `2.15` and the compare branch is `backport/backport-2985-to-2.15`.
The backport to `2.16` could not be completed automatically. To backport manually, run these commands in your terminal:

```shell
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.16 2.16
# Navigate to the new working tree
cd .worktrees/backport-2.16
# Create a new branch
git switch --create backport/backport-2985-to-2.16
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 f4b472480390805e0a24de46d9781a665cfb6184
# Push it to GitHub
git push --set-upstream origin backport/backport-2985-to-2.16
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.16
```

Then, create a pull request where the base branch is `2.16` and the compare branch is `backport/backport-2985-to-2.16`.
…2985) Signed-off-by: Mingshi Liu <[email protected]> (cherry picked from commit f4b4724)
…2985) (#3031) Signed-off-by: Mingshi Liu <[email protected]> (cherry picked from commit f4b4724) Co-authored-by: Mingshi Liu <[email protected]>
…2985) (#3030) Signed-off-by: Mingshi Liu <[email protected]> (cherry picked from commit f4b4724) Co-authored-by: Mingshi Liu <[email protected]>
Description
In the ML inference ingest processor, when using JsonPath in `input_map`, the JsonPath configuration is set with `ALWAYS_RETURN_LIST`; in connectors and in the ML inference search processors, however, the configuration does not always return a list. This change uses the standard JsonPath configuration in the `ModelExecutor` class, so that the ML inference search and ingest processors share the same configuration.

The default configuration now returns the object in its original format. For the sample doc mentioned in the issue (a simple object in the sample item index):

- if `input_map` is configured as `{"input": "$.item.text"}`, the model input stays in its original string format;
- if `input_map` is configured as `{"input": "item.text"}`, the model input also remains a string.
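The difference between the two configurations can be sketched outside the Java codebase. Below is a minimal Python illustration, not the processor's actual code (which uses the Jayway JsonPath library in Java): the hypothetical `extract` helper walks a definite, non-wildcard path and shows how an `ALWAYS_RETURN_LIST`-style option wraps a scalar hit in a list, while the default configuration preserves the original format:

```python
def extract(doc, path, always_return_list=False):
    """Walk a dotted path (leading '$.' optional) through nested dicts,
    mimicking a definite (non-wildcard) JsonPath read."""
    value = doc
    for key in path.lstrip("$.").split("."):
        value = value[key]
    # ALWAYS_RETURN_LIST-style behavior: wrap scalar results in a list.
    # Default behavior: return the value in its original form.
    if always_return_list and not isinstance(value, list):
        return [value]
    return value

doc = {"item": {"text": "hello world"}}
print(extract(doc, "$.item.text"))                           # hello world
print(extract(doc, "$.item.text", always_return_list=True))  # ['hello world']
```

With the fix, the ingest processor behaves like the first call: a string field stays a string instead of being silently wrapped in a single-element list.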
Related Issues
#2974
Check List
- Commits are signed per the DCO using `--signoff`.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following the Developer Certificate of Origin and signing off your commits, please check here.