Replies: 1 comment
This discussion was automatically closed because the community moved to community.vercel.com/ai-sdk
Question
I'm building a chat application using:

- the `ai` package
- `@ai-sdk/openai`
- `streamText` for streaming responses
- the `useChat` hook on the frontend with `DefaultChatTransport`

The Problem
When conversations get very long (150+ messages), we need to compact the older history (e.g. replace it with a summary) so requests stay within the model's context window.
What I've Tried

I found that the `prepareStep` callback in `generateText`/`streamText` is the recommended place to implement this (Discussion #8192). The pattern would be:
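For context, here is a minimal sketch of what I understand that pattern to be. The `prepareStep` option shape is my reading of the discussion, and `summarize`, `THRESHOLD`, and `KEEP_RECENT` are placeholder names of mine, not SDK APIs:

```typescript
// Sketch only: the compaction helper is pure so it runs standalone;
// the summarization call and streamText wiring are shown as comments.

type Msg = { role: "system" | "user" | "assistant"; content: string };

const THRESHOLD = 150;  // compact once the history grows past this
const KEEP_RECENT = 20; // always keep the newest turns verbatim

// Placeholder: a real implementation would call the model to summarize.
function summarize(older: Msg[]): string {
  return `Summary of ${older.length} earlier messages`;
}

function compactIfNeeded(messages: Msg[]): Msg[] {
  if (messages.length <= THRESHOLD) return messages;
  const cut = messages.length - KEEP_RECENT;
  const summary: Msg = {
    role: "system",
    content: summarize(messages.slice(0, cut)),
  };
  return [summary, ...messages.slice(cut)];
}

// Hypothetical wiring inside the route handler (shape assumed, not verified):
// const result = streamText({
//   model: openai("gpt-4o"),
//   messages,
//   prepareStep: async ({ messages }) => ({
//     messages: compactIfNeeded(messages),
//   }),
// });
```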
The Issue

The problem is that the compaction would re-trigger on every subsequent request because:

- `prepareStep` sees 150+ messages → compacts

My Proposed Solution
Track compaction state on the frontend:
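Roughly like this; `compactedUpToIndex` is the only name taken from my description above, the rest of the shape is an assumption:

```typescript
// Sketch of client-side compaction bookkeeping (assumed shape, not an AI SDK API).

type CompactionState = {
  // Messages before this index are already covered by `summary`.
  compactedUpToIndex: number;
  summary: string | null;
};

const initialCompactionState: CompactionState = {
  compactedUpToIndex: 0,
  summary: null,
};

// Merge a compaction result returned by the API; ignore stale/backward
// updates so a re-delivered old result can never "un-compact" the history.
function applyCompaction(
  state: CompactionState,
  update: { compactedUpToIndex: number; summary: string },
): CompactionState {
  if (update.compactedUpToIndex <= state.compactedUpToIndex) return state;
  return {
    compactedUpToIndex: update.compactedUpToIndex,
    summary: update.summary,
  };
}
```

With `useChat`, this state could then be sent alongside each request in the transport's request body (my assumption about the integration point).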
Then in the API:
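Something along these lines, so the prompt handed to `streamText` never contains the already-compacted prefix again (all names here are illustrative assumptions):

```typescript
// Sketch of the server side: rebuild the prompt from the client-supplied
// compaction state so the full 150+ messages are never re-seen.

type Msg = { role: "system" | "user" | "assistant"; content: string };

function buildPromptMessages(
  messages: Msg[],
  state: { compactedUpToIndex: number; summary: string | null },
): Msg[] {
  if (!state.summary || state.compactedUpToIndex <= 0) return messages;
  return [
    { role: "system", content: `Summary of earlier conversation: ${state.summary}` },
    ...messages.slice(state.compactedUpToIndex),
  ];
}
```

The handler would pass `buildPromptMessages(messages, state)` to `streamText`, and only run a fresh compaction when the remaining window exceeds the threshold again.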
Questions
Is this the right approach? Or is there a built-in way to handle this in AI SDK v5?
Does `prepareStep` work with `streamText`? The docs mainly show `generateText` examples.

How do others handle this? Is there a common pattern for:
Alternative: Should the frontend slice messages before sending? Instead of sending all messages and letting `prepareStep` handle it, should the frontend only send `messages.slice(compactedUpToIndex)` along with the summary?

Environment

```json
{ "ai": "^5.x", "@ai-sdk/openai": "^1.x", "next": "15.5.2" }
```

Related
Any guidance would be appreciated! 🙏