Hi,
I’ve been encountering a couple of core functionality issues with the Agno Framework, and this one is among them.
session_id = "test_session_004"
user_id = "test_user_001"
query = input("Enter:")
agent = Agent(
model= AIModel.get_model(), # GPT-OSS Model
db=AgnoDB.get_storage(), # MongoDB
description="You are an assistant. Respond in very brief sentences. Mostly in one line",
user_id=user_id,
session_id=session_id,
num_history_runs=1,
enable_session_summaries=True
)
workflow = Workflow(
user_id=user_id,
session_id=session_id,
steps=[Step(agent=agent)],
add_workflow_history_to_steps=True,
db=AgnoDB.get_storage()
)
response = workflow.run(query)
print(f"BOT: {response.content}")
DEBUG (1st Message)
DEBUG ****** Agent ID: 28c0ab95-1626-4f88-9ed8-964b0826ed48 ******
DEBUG Creating new AgentSession: test_session_004
DEBUG ** Agent Run Start: 3c671a9c-191c-45e3-af98-1b177be31f57 ***
DEBUG ------------------ OpenAI Response Start -------------------
DEBUG ---------------- Model: openai/gpt-oss-120b ----------------
DEBUG ========================== system ==========================
DEBUG You are an assistant. Respond in very brief sentences. Mostly in one line
DEBUG =========================== user ===========================
DEBUG My name is Rafique
DEBUG ======================== assistant =========================
DEBUG <reasoning>
The user says "My name is Rafique". The system says respond in very brief
sentences, mostly one line. Probably acknowledge.
</reasoning>
DEBUG Nice to meet you, Rafique.
DEBUG ************************ METRICS *************************
DEBUG * Tokens: input=93, output=44, total=137
DEBUG * Duration: 1.0153s
DEBUG * Tokens per second: 43.3351 tokens/s
DEBUG ************************ METRICS *************************
DEBUG ------------------- OpenAI Response End --------------------
DEBUG Added RunOutput to Agent Session
DEBUG ***************** Creating session summary *****************
DEBUG ------------------ OpenAI Response Start -------------------
DEBUG ---------------- Model: openai/gpt-oss-120b ----------------
DEBUG ========================== system ==========================
DEBUG Analyze the following conversation between a user and an assistant, and
extract the following details:
- Summary (str): Provide a concise summary of the session, focusing on
important information that would be helpful for future interactions.
- Topics (Optional[List[str]]): List the topics discussed in the session.
Keep the summary concise and to the point. Only include relevant
information.
<conversation>User: My name is Rafique
Assistant: Nice to meet you, Rafique.
</conversation>
DEBUG =========================== user ===========================
DEBUG Provide the summary of the conversation.
DEBUG ======================== assistant =========================
DEBUG <reasoning>
We have a system prompt to "Analyze the following conversation between a
user and an assistant...". The user asks "Provide the summary of the
conversation." We must output JSON? The instruction says: "Provide a
concise summary of the session, focusing on important information... Keep
the summary concise and to the point. Only include relevant information."
It expects a structure:
- Summary (str)
- Topics (Optional[List[str]])
Probably JSON format. Many prior similar tasks require output like:
{
"Summary": "...",
"Topics": [...]
}
Thus we give summary: "User introduced themselves as Rafique, assistant
greeted them." Topics: maybe ["self-introduction", "greeting"]. Provide
concise.
Let's output JSON.
</reasoning>
DEBUG {
"summary": "User introduced themselves as Rafique; the assistant
responded with a greeting.",
"topics": ["self-introduction", "greeting"]
}
DEBUG ************************ METRICS *************************
DEBUG * Tokens: input=175, output=190, total=365
DEBUG * Duration: 1.5015s
DEBUG * Tokens per second: 126.5435 tokens/s
DEBUG ************************ METRICS *************************
DEBUG ------------------- OpenAI Response End --------------------
DEBUG ***************** Session summary created ******************
DEBUG Added RunOutput to Agent Session
DEBUG *** Agent Run End: 3c671a9c-191c-45e3-af98-1b177be31f57 ****
BOT: Nice to meet you, Rafique.
DEBUG (Last Message)
DEBUG ****** Agent ID: 451c5146-f659-431f-b5e0-3f094b2e02d3 ******
DEBUG Creating new AgentSession: test_session_004
DEBUG ** Agent Run Start: 48df1b0c-d006-4578-a7e7-a07d7416648f ***
DEBUG ------------------ OpenAI Response Start -------------------
DEBUG ---------------- Model: openai/gpt-oss-120b ----------------
DEBUG ========================== system ==========================
DEBUG You are an assistant. Respond in very brief sentences. Mostly in one line
DEBUG =========================== user ===========================
DEBUG What's my name?
DEBUG ======================== assistant =========================
DEBUG <reasoning>
We can't answer because we don't know name; must be brief.
</reasoning>
DEBUG I don’t have that information.
DEBUG ************************ METRICS *************************
DEBUG * Tokens: input=92, output=30, total=122
DEBUG * Duration: 0.8022s
DEBUG * Tokens per second: 37.3979 tokens/s
DEBUG ************************ METRICS *************************
DEBUG ------------------- OpenAI Response End --------------------
DEBUG Added RunOutput to Agent Session
DEBUG ***************** Creating session summary *****************
DEBUG ------------------ OpenAI Response Start -------------------
DEBUG ---------------- Model: openai/gpt-oss-120b ----------------
DEBUG ========================== system ==========================
DEBUG Analyze the following conversation between a user and an assistant, and
extract the following details:
- Summary (str): Provide a concise summary of the session, focusing on
important information that would be helpful for future interactions.
- Topics (Optional[List[str]]): List the topics discussed in the session.
Keep the summary concise and to the point. Only include relevant
information.
<conversation>User: What's my name?
Assistant: I don’t have that information.
</conversation>
DEBUG =========================== user ===========================
DEBUG Provide the summary of the conversation.
DEBUG ======================== assistant =========================
DEBUG <reasoning>
The user asks to provide summary of conversation. We need to output JSON
with fields Summary and Topics optional. The convo: User asked "What's my
name?" Assistant responded "I don’t have that information." So summary:
user asked for their name, assistant said doesn't have that info. Topics:
maybe "personal identification" or "name". Provide concise.
</reasoning>
DEBUG {
"summary": "The user asked for their name, and the assistant responded
that it does not have that information.",
"topics": ["personal identification", "name inquiry"]
}
DEBUG ************************ METRICS *************************
DEBUG * Tokens: input=172, output=119, total=291
DEBUG * Duration: 1.1865s
DEBUG * Tokens per second: 100.2932 tokens/s
DEBUG ************************ METRICS *************************
DEBUG ------------------- OpenAI Response End --------------------
DEBUG ***************** Session summary created ******************
DEBUG Added RunOutput to Agent Session
DEBUG *** Agent Run End: 48df1b0c-d006-4578-a7e7-a07d7416648f ****
BOT: I don’t have that information.
SCREENSHOT:
Without Workflow:
With Workflow:
ISSUE Description:
The session summaries work fine with a standalone agent. However, when the same agent runs inside a workflow, the summary is never updated in the database, even though the LLM does perform the summarization (verified through the debug logs above).
This is critical because, in a workflow, summarization is the backbone for maintaining continuity and deriving context. Without it, we would have to inject large chat histories just to retain the initial entity context.
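For reference, this is roughly how I check whether the summary lands in Mongo after a run. The database name, collection name, and "summary" field below come from my own setup, so treat them as assumptions rather than Agno's documented schema:

# Hypothetical check of what actually gets stored for the session in MongoDB.
# The database/collection names and the "summary" field are assumptions from
# my own setup, not Agno's documented schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
sessions = client["agno"]["agent_sessions"]          # assumed db/collection names

doc = sessions.find_one({"session_id": "test_session_004"})
print(doc.get("summary") if doc else "session not found")

# Standalone agent: the summary shows up here after the run.
# Inside the workflow: it stays empty, even though the debug log above shows
# the summary being generated.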
Note: This use case is different from user memory. I want the workflow itself to retain awareness of entity context.
Any ideas or workarounds?
PS: The documentation is good, but not good enough. It would really help to document the role and usage of each attribute. For instance, I never knew that I had to enable add_workflow_history_to_steps to get history working for the agents; I thought add_history_to_context was enough to take care of it, since I don't need history for all the steps. A rough sketch of how I now read the two settings follows.
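This is only my current understanding, not something the docs spell out: add_history_to_context sits on the Agent and only affects that agent's own prompt, while add_workflow_history_to_steps sits on the Workflow and forwards workflow-level history to each step. Reusing the helpers and IDs from the snippet above:

# Rough sketch of how I now read the two settings; the behaviour described in
# the comments is my own assumption, not confirmed by the documentation.
agent = Agent(
    model=AIModel.get_model(),
    db=AgnoDB.get_storage(),
    add_history_to_context=True,   # agent-level: history for this agent's own runs only
    num_history_runs=1,
    enable_session_summaries=True,
    user_id=user_id,
    session_id=session_id,
)

workflow = Workflow(
    db=AgnoDB.get_storage(),
    steps=[Step(agent=agent)],
    add_workflow_history_to_steps=True,   # workflow-level: forwards workflow history to every step
    user_id=user_id,
    session_id=session_id,
)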