Hi everyone, I just put in a little effort profiling a run of Example 1 through ARAX. This is just one instance, of course: https://arax.ci.transltr.io/?r=293759
Here's my analysis:
0.15s Launch, setup, and start launching queries to KPs and begin waiting for responses. wow, fast!
------- 0.3s since KP request: MolePro responds already!
------- 0.8s since KP request: RTX-KG2 responds, nice!
------- <1.0s since KP request: 8 other KPs are queried and respond with no edges, but do so in less than a second
------- 6.5s since KP request: Service Provider lumbers across the finish line panting heavily
------- 24.5s since KP request: SPOKE finally limps across the finish line
0.5s Add NGD edges to the graph
0.6s Remove general concepts from the knowledge graph
<0.1s Other steps like Resultify seem negligible
0.3s Writing the Response to the S3 bucket
26.6s: Total processing time from receipt of the query to beginning to stream the Response
24.5s: Time spent waiting for KPs to respond: MolePro and RTX-KG2 are sub-second, Service Provider is slowish, and SPOKE is a turtle
2.1s: Other processing of data
Two local processing steps appear to stand out:
Computing NGD edges: 0.5 seconds seems pretty reasonable, but could it be 0.05 seconds?
Removing general concepts: 0.6 seconds is okay, but this seems to be our slowest general processing step. Could this be 0.06 seconds?
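For what it's worth, if the general-concept removal step is checking each node against a list one at a time, a single pass with set lookups should be very fast even on large graphs. Here's a minimal sketch, assuming a TRAPI-ish dict-of-dicts knowledge graph; the `GENERAL_CONCEPT_CURIES` set, graph shape, and function name are all hypothetical, since I don't know how ARAX actually implements this step:

```python
# Hypothetical sketch: drop "general concept" nodes and any edges that
# touch them, in one pass, using O(1) set membership tests.
# GENERAL_CONCEPT_CURIES and the graph layout are assumptions, not ARAX's code.

GENERAL_CONCEPT_CURIES = {"biolink:NamedThing", "MONDO:0000001"}  # example entries

def remove_general_concepts(kg: dict) -> dict:
    """Return a copy of the KG without general-concept nodes or their edges."""
    keep_nodes = {curie: node for curie, node in kg["nodes"].items()
                  if curie not in GENERAL_CONCEPT_CURIES}
    keep_edges = {eid: edge for eid, edge in kg["edges"].items()
                  if edge["subject"] in keep_nodes and edge["object"] in keep_nodes}
    return {"nodes": keep_nodes, "edges": keep_edges}

# Tiny demo graph: one general-concept node, one edge touching it.
kg = {
    "nodes": {"MONDO:0000001": {}, "CHEBI:1234": {}, "MONDO:0005148": {}},
    "edges": {"e1": {"subject": "CHEBI:1234", "object": "MONDO:0005148"},
              "e2": {"subject": "CHEBI:1234", "object": "MONDO:0000001"}},
}
pruned = remove_general_concepts(kg)
print(sorted(pruned["nodes"]))  # ['CHEBI:1234', 'MONDO:0005148']
print(sorted(pruned["edges"]))  # ['e1']
```

If the real step is already doing something like this, then the 0.6 seconds is probably dominated by something else (e.g., ontology lookups), and this sketch wouldn't help.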
Conclusion: How could we be faster?
We could timeout our KPs faster. I think I overheard that Aragorn times out their KPs at 10s
We could remove general concepts faster? My sense is that this could be a lot faster, though I admit I don't know what's actually happening under the hood in that step.
We could cache the whole initial query. If this same exact query has been done before very recently, why do it again?
We could cache KP queries/results. If we sent an exact same query to a KP very recently, why do it again?