
Conversation


@joelrobin18 joelrobin18 commented Dec 26, 2025

Description

When using Gemini with thinking mode enabled (thinking_config), reasoning content was being lost from the message history. The model returns both reasoning (thinking process) and text content in the same response, but only the text was captured.

This affected observability, tracing, and any downstream processing that relies on the conversation history containing the full model output.

Before:

[{"text": "2 + 2 = 4"}]

After:

[
  {"reasoningContent": {"reasoningText": {"text": "Let me calculate..."}}},
  {"text": "2 + 2 = 4"}
]
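As a minimal sketch of the fix (the function and state names here are illustrative simplifications, not the SDK's actual API), the block-close step appends both entries when both are present:

```python
# Hypothetical simplification of the stream handler's block-close step;
# names are illustrative, not the SDK's actual API.
def close_content_block(state):
    """Flush any accumulated reasoning and text into the content list."""
    content = state["content"]
    reasoning_text = state.get("reasoning_text", "")
    text = state.get("text", "")

    # The fix: two independent `if` checks instead of an `elif` chain,
    # so a block carrying both reasoning and text emits both entries.
    if reasoning_text:
        content.append(
            {"reasoningContent": {"reasoningText": {"text": reasoning_text}}}
        )
    if text:
        content.append({"text": text})
    return content


state = {"content": [], "reasoning_text": "Let me calculate...", "text": "2 + 2 = 4"}
print(close_content_block(state))
```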

Related Issues

Fix #1394

Documentation PR

N/A

Type of Change

Bug fix

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran hatch run prepare

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@joelrobin18
Author

@zastrowm @dbschmigelski @pgrayy Could you please review this PR? Thanks!

@dbschmigelski
Member

Hi, thanks for this, everything looks good to me. I just added some integration tests to make sure we don't break anything in the multi-turn case.

@codecov

codecov bot commented Jan 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


Member

@dbschmigelski dbschmigelski left a comment


Adding this to block merging. This PR and https://github.com/strands-agents/sdk-python/pull/1382/changes seem highly related. We should understand how they relate, and whether we are tackling the core of the problem with #1395 or need to go deeper first.

@cagataycali
Member

🦆 Strands-Coder Review for PR #1395

I've conducted a thorough review of this PR. Here's my analysis:

✅ What's Good

  1. Clean, Minimal Fix: The change to handle_content_block_stop is elegant - changing from elif to independent if statements allows both reasoning and text content to be captured from the same block.

  2. Excellent Test Coverage:

    • 167 new lines in unit tests covering all combinations:
      • Reasoning + Text
      • Reasoning + Text + Signature
      • Reasoning + Text + Citations
    • Integration tests added for Anthropic, Bedrock, Gemini, and LiteLLM
    • Multi-turn conversation tests to prevent regressions
  3. Well-Documented PR: Clear before/after examples make the change easy to understand.

  4. CI Status: ✅ All 15 checks passing including:

    • All Python versions (3.10-3.13) on Linux, Windows, macOS
    • Lint, check-api, codecov

⚠️ Relationship with PR #1382

@dbschmigelski raised an important point about the relationship with PR #1382. Here's my analysis:

| Aspect | PR #1395 (this) | PR #1382 |
| --- | --- | --- |
| Focus | Streaming event loop: capturing both reasoning & text | Gemini model: thought_signature preservation |
| Layer | Generic event_loop/streaming.py | Model-specific models/gemini.py |
| Problem | Reasoning lost from message history | 400 errors on multi-turn with missing thought_signature |
| Issue | #1394 | #1199 |

My Assessment: These PRs are complementary, not conflicting:

They address different layers of the stack and can be merged independently. However, I recommend:

  1. Merge #1395 ("fix: capture reasoning content when model returns both reasoning and text in same response") first: it's more foundational and has full CI green
  2. Then merge #1382 ("feat(gemini): add thought signature preservation for thinking models"), which may need a rebase after #1395

💡 Minor Suggestion

The comment ordering in the code change could be clearer:

# Handle reasoning content - checked independently of text so both can be captured
if reasoning_text:
    ...

# Handle text content - checked after reasoning so both can be captured in the same block
if text:
    ...

This is good, but consider adding a brief docstring or expanding the comment to explicitly mention this fixes the Gemini thinking mode case where both arrive in the same content block.

🎯 Recommendation

APPROVE ✅ - This is a solid bug fix with excellent test coverage. The CI is fully green, and the change is minimal and well-targeted.

I support merging this PR. The relationship with #1382 is complementary, not blocking.


Review by Strands-Coder - Autonomous GitHub Agent 🦆

@strands-agent
Contributor

🔄 Quick Status Summary

For maintainer convenience, here's the current state:

Approval Status

Approved by @dbschmigelski (Jan 5)
All CI checks passing (15/15 success)
Reviewed by strands-coder (detailed analysis above)

What's Fixed

Issue #1394 - Reasoning content lost when model returns both reasoning + text in same response (affects Gemini thinking mode)

Integration Test Coverage

Added by @dbschmigelski:

  • Anthropic ✅
  • Bedrock ✅
  • Gemini ✅
  • LiteLLM ✅
  • Multi-turn conversation protection ✅

Relationship with PR #1382

These are complementary (see my analysis above):

Both can merge independently, but recommend merging this one first as it's more foundational.

Ready for merge when a maintainer has a moment! 🚀


🤖 This is an experimental AI agent response from the Strands team, powered by Strands Agents. We're exploring how AI agents can help with community support and development. Your feedback helps us improve! If you'd prefer human assistance, please let us know.

@strands-agent
Contributor

Second Review: Excellent Bug Fix! 🎯

Thanks @joelrobin18 for this well-crafted fix! I've thoroughly reviewed the changes and they look solid.

What This PR Does Well

  1. Correct Root Cause Identification: The bug was in the elif chain in handle_content_block_stop() - when a model (like Gemini with thinking mode) returns both reasoningContent AND text in the same content block, only one was being captured.

  2. Minimal, Targeted Fix: Changing elif text: to if text: and elif reasoning_text: to if reasoning_text: allows both content types to be processed independently. This is the right approach.

  3. Excellent Test Coverage: 167 new lines of unit tests covering:

    • Reasoning + Text (Gemini thinking mode scenario)
    • Reasoning + Text + Signature
    • Reasoning + Text + Citations
    • Full stream processing with both content types
  4. Integration Test Coverage: Tests added for Anthropic, Bedrock, Gemini, and LiteLLM providers.

Code Analysis

The core change is elegant:

Before:

elif text:
    # handle text
elif reasoning_text:
    # handle reasoning

After:

if reasoning_text:
    # handle reasoning first
if text:
    # handle text second

This ensures both content types are captured when present together. The ordering (reasoning first, then text) is logical - it matches how the content would appear conceptually (thinking → output).
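To make the behavioral difference concrete, here is an illustrative contrast (not the SDK's actual code) between the old elif chain and the fixed independent-if version for a block holding both reasoning and text:

```python
# Illustrative contrast between the old elif chain and the fixed
# independent-if version; helper names are hypothetical.
def close_block_before(text, reasoning_text):
    # Old behavior: elif means only the first truthy branch fires,
    # so the reasoning entry is silently dropped.
    content = []
    if text:
        content.append({"text": text})
    elif reasoning_text:
        content.append(
            {"reasoningContent": {"reasoningText": {"text": reasoning_text}}}
        )
    return content


def close_block_after(text, reasoning_text):
    # Fixed behavior: both branches fire, reasoning first.
    content = []
    if reasoning_text:
        content.append(
            {"reasoningContent": {"reasoningText": {"text": reasoning_text}}}
        )
    if text:
        content.append({"text": text})
    return content


before = close_block_before("2 + 2 = 4", "Let me calculate...")
after = close_block_after("2 + 2 = 4", "Let me calculate...")
print(len(before))  # 1 entry: reasoning dropped
print(len(after))   # 2 entries: reasoning captured, then text
```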

Relationship with PR #1382

I concur with @dbschmigelski's note about understanding the relationship:

| PR | Layer | Focus |
| --- | --- | --- |
| #1395 (this) | Generic streaming layer (event_loop/streaming.py) | Captures both content types in message history |
| #1382 | Model-specific layer (models/gemini.py) | Preserves thought_signature for Gemini API requirements |

These PRs are complementary, not overlapping:

Recommendation: These can be merged independently. #1395 is a foundational fix that benefits all providers, while #1382 is Gemini-specific.

Minor Observation

The invocation_state parameter removal in stream_messages() appears unrelated to the core bug fix. It might be worth splitting into a separate commit or PR for cleaner git history, though this is a minor concern.

Verdict

LGTM - This is a clean, well-tested bug fix that correctly addresses the issue. The test coverage gives confidence this won't regress.


🤖 This review was performed by an autonomous AI agent (strands-coder). The agent analyzed the code diff, test coverage, and related PRs to provide this assessment. For questions or feedback about this review, please mention @strands-agent.

content.append({"toolUse": tool_use})
state["current_tool_use"] = {}

elif text:
Member

@pgrayy pgrayy Jan 13, 2026


This problem might actually be specific to Gemini and will need to be addressed in a similar manner as #947. Gemini does not mark the start and stop of the reasoning content blocks which is why it gets ignored. We saw that same problem with LiteLLM.
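One plausible adapter-level approach along the lines of #947, sketched under the assumption that reasoning boundaries can be inferred from the shape of each streamed delta (all event names here are illustrative, not the SDK's actual event schema):

```python
# Hedged sketch: synthesize the missing start/stop events around reasoning
# deltas when the upstream API streams reasoning without block markers.
# Event names are illustrative assumptions, not the SDK's actual schema.
def inject_block_boundaries(deltas):
    """Yield (event_type, payload) pairs with explicit reasoning block edges."""
    in_reasoning = False
    for delta in deltas:
        is_reasoning = "reasoning" in delta
        if is_reasoning and not in_reasoning:
            yield ("contentBlockStart", {"type": "reasoning"})
            in_reasoning = True
        if not is_reasoning and in_reasoning:
            yield ("contentBlockStop", {"type": "reasoning"})
            in_reasoning = False
        yield ("contentBlockDelta", delta)
    if in_reasoning:
        # Close a still-open reasoning block at end of stream.
        yield ("contentBlockStop", {"type": "reasoning"})


events = list(inject_block_boundaries([
    {"reasoning": "Let me calculate..."},
    {"text": "2 + 2 = 4"},
]))
print([e[0] for e in events])
```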



Development

Successfully merging this pull request may close these issues.

[BUG] Reasoning content is lost when model returns both reasoning and text in the same response

6 participants