merge main into dev #6578

Merged 2 commits on Jan 20, 2025
docs/source/routing/performance/caching/entity.mdx (31 changes: 15 additions & 16 deletions)

@@ -216,16 +216,15 @@ This entry contains an object with the `all` field to affect all subgraph requests
You can invalidate entity cache entries with a [specifically formatted request](#invalidation-request-format) once you [configure your router](#configuration) appropriately. For example, if price data changes before a price entity's TTL expires, you can send an invalidation request.

```mermaid

flowchart RL
subgraph QueryResponse["Cache invalidation POST"]
n1["{
-  "kind": "subgraph",
-  "subgraph": "price",
-  "type": "Price",
-  "key": {
-    "id": "101"
-  }
+    "kind": "subgraph",
+    "subgraph": "price",
+    "type": "Price",
+    "key": {
+        "id": "101"
+    }
}"]
end

@@ -236,18 +235,18 @@ flowchart RL
end

subgraph PriceQueryFragment["Price Query Fragment (e.g. TTL 2200)"]
-n2[" ̶{̶
-  " ̶p̶r̶i̶c̶e̶": ̶{̶
-    " ̶i̶d̶": ̶1̶0̶1̶,
-    " ̶p̶r̶o̶d̶u̶c̶t̶_̶i̶d̶": ̶1̶2̶,
-    " ̶a̶m̶o̶u̶n̶t̶": ̶1̶5̶0̶0̶,
-    "̶c̶u̶r̶r̶e̶n̶c̶y̶_̶c̶o̶d̶e̶": " ̶U̶S̶D̶"
-   ̶}̶
-̶}̶"]
+n2["<del>{
+&nbsp;&nbsp;&nbsp;&nbsp;&quot;price&quot;: {
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&quot;id&quot;: 101,
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&quot;product_id&quot;: 12,
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&quot;amount&quot;: 1500,
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&quot;currency_code&quot;: &quot;USD&quot;
+&nbsp;&nbsp;&nbsp;&nbsp;}
+}</del>"]
end

Router
-Database[("&emsp;&emsp;&emsp;")]
+Database[("&nbsp;&nbsp;&nbsp;&nbsp;")]

QueryResponse --> Router
Purchases --> Router
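The invalidation payload shown in the diagram can be built and sent like this. This is a minimal sketch: the endpoint URL and `Authorization` value are illustrative placeholders, and the exact listen address, path, and shared key come from your router's invalidation configuration (see the configuration section).

```python
import json
import urllib.request

# Invalidation request matching the diagram: evict the cached Price
# entity with key id "101" from the "price" subgraph's cache entries.
payload = [{
    "kind": "subgraph",
    "subgraph": "price",
    "type": "Price",
    "key": {"id": "101"},
}]

# The URL and Authorization value below are placeholders, not router
# defaults -- take both from your own router configuration.
req = urllib.request.Request(
    "http://127.0.0.1:4000/invalidation",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "<shared-key>",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once the endpoint is configured
```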
docs/source/routing/performance/caching/in-memory.mdx (15 changes: 1 addition & 14 deletions)

@@ -40,7 +40,7 @@ supergraph:

### Cache warm-up

-When loading a new schema, a query plan might change for some queries, so cached query plans cannot be reused. 
+When loading a new schema, a query plan might change for some queries, so cached query plans cannot be reused.

To prevent increased latency upon query plan cache invalidation, the router precomputes query plans for the most used queries from the cache when a new schema is loaded.

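How many queries are precomputed is tunable. A configuration sketch, assuming the `warmed_up_queries` option under `supergraph.query_planning` as it appears elsewhere in this diff:

```yaml title="router.yaml"
supergraph:
  query_planning:
    # Precompute plans for the 100 most-used cached queries on schema reload
    warmed_up_queries: 100
```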
@@ -80,19 +80,6 @@ then look at `apollo_router_schema_loading_time` and `apollo.router.query_planni

If the router is using distributed caching for query plans, the warm-up phase will also store the new query plans in Redis. Since all Router instances might have the same distributions of queries in their in-memory cache, the list of queries is shuffled before warm-up, so each Router instance can plan queries in a different order and share their results through the cache.

-#### Schema aware query hashing
-
-The query plan cache key uses a hashing algorithm specifically designed for GraphQL queries, using the schema. If a schema update does not affect a query (example: a field was added), then the query hash will stay the same. The query plan cache can use that key during warm up to check if a cached entry can be reused instead of planning it again.
-
-It can be activated through this option:
-
-```yaml title="router.yaml"
-supergraph:
-  query_planning:
-    warmed_up_queries: 100
-    experimental_reuse_query_plans: true
-```

## Caching automatic persisted queries (APQ)

[Automatic Persisted Queries (**APQ**)](/apollo-server/performance/apq/) enable GraphQL clients to send a server the _hash_ of their query string, _instead of_ sending the query string itself. When query strings are very large, this can significantly reduce network usage.
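The client side of the APQ flow can be sketched as follows. This assumes only the standard `persistedQuery` extension format (the client sends the query's SHA-256 hash; the query string itself is an arbitrary example):

```python
import hashlib
import json

query = "{ me { name } }"
sha256 = hashlib.sha256(query.encode("utf-8")).hexdigest()

# First attempt: send only the hash. A server that has seen this query
# before executes it; otherwise it replies with PERSISTED_QUERY_NOT_FOUND.
first_request = {
    "extensions": {
        "persistedQuery": {"version": 1, "sha256Hash": sha256}
    }
}

# On PERSISTED_QUERY_NOT_FOUND, retry with the full query string so the
# server can register the hash for future requests.
retry_request = {
    "query": query,
    "extensions": {
        "persistedQuery": {"version": 1, "sha256Hash": sha256}
    },
}
print(json.dumps(first_request))
```

Subsequent requests for the same query then need only the short hash-only form, which is where the network savings come from.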